https://ijoml.com/index.php/ijoml/issue/feed
International Journal of Machine Learning (IJOML)
2026-01-28T00:00:00+07:00
IJOML (ijomljournal@gmail.com)
Open Journal Systems

<p>The <strong>International Journal of Machine Learning (IJOML)</strong> provides a global forum for disseminating high-quality, peer-reviewed research on theoretical foundations, methodological innovations, and applied advances in machine learning. The journal emphasizes <strong>transparency, reproducibility, and accessibility</strong> of data, algorithms, and processes to foster accountable and impactful scientific progress.</p>

<p>IJOML welcomes <strong>original contributions, surveys, and case studies</strong> that enhance the understanding and application of machine learning in both academic and industrial contexts. The journal is published <strong>twice a year, in June and December</strong>.</p>

<table>
<thead>
<tr><th>Journal Information</th><th>Details</th></tr>
</thead>
<tbody>
<tr><td><strong>Original Title</strong></td><td>International Journal of Machine Learning</td></tr>
<tr><td><strong>Short Title</strong></td><td>IJOML</td></tr>
<tr><td><strong>Abbreviation</strong></td><td>International Journal of Machine Learning</td></tr>
<tr><td><strong>Frequency</strong></td><td>2 issues per year (June and December)</td></tr>
<tr><td><strong>Publisher</strong></td><td>APJIKOM</td></tr>
<tr><td><strong>DOI</strong></td><td>10.52436/1.ijoml.year.vol.no.IDPaper</td></tr>
<tr><td><strong>P-ISSN</strong></td><td>xxxx-xxxx</td></tr>
<tr><td><strong>e-ISSN</strong></td><td>3124-6362</td></tr>
<tr><td><strong>Indexing</strong></td><td>-</td></tr>
<tr><td><strong>Discipline</strong></td><td>Machine Learning</td></tr>
</tbody>
</table>

<p>The <strong>International Journal of Machine Learning (IJOML)</strong> has published papers by authors from different countries. Author diversity in IJOML:</p>

<p><strong>Vol. 1 No. 1, June 2026</strong>: Indonesia, Malaysia, Poland</p>

https://ijoml.com/index.php/ijoml/article/view/2
Evaluation of Undersampling and Oversampling Techniques in Term Deposit Prediction: A Gradient Boosting Approach
2026-01-27T07:56:21+07:00
Lasmedi Afuan (lasmedi.afuan@unsoed.ac.id), Abdul Karim (abdullkarim@korea.ac.kr), Ipung Permadi (ipung.permadi@unsoed.ac.id)

<p>Time deposits play a pivotal role in maintaining banking liquidity, yet telemarketing campaigns designed to secure them are often inefficient due to low response rates and untargeted outreach. The primary challenge in predictive marketing modeling lies in extreme class imbalance, which renders standard algorithms prone to bias and causes them to miss potential customers. This study aims to validate the effectiveness of Gradient Boosting models and empirically evaluate the impact of various resampling techniques in mitigating class distribution disparities. The methodology applies the XGBoost, LightGBM, and CatBoost algorithms to the UCI Bank Marketing dataset, integrated with Random Under-Sampling, Random Over-Sampling, SMOTENC, and Tomek Links strategies. Experimental results reveal a significant trade-off between sensitivity and precision, in which LightGBM paired with Random Under-Sampling achieved the highest detection capability with a Recall of 88.28%. Meanwhile, the combination of CatBoost with Random Over-Sampling demonstrated the best balance, attaining an F1-Score of 0.6040, a Recall of 81.95%, and an AUC-ROC of 0.9326. These findings offer a strategic contribution to bank management in selecting analytic approaches aligned with business priorities, whether the focus is on operational cost efficiency or aggressive market penetration to optimize customer acquisition.</p>

2026-01-28T00:00:00+07:00
Copyright (c) 2026 Lasmedi Afuan, Abdul Karim, Ipung Permadi
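<p>For readers who want to reproduce this kind of comparison, the sketch below pairs imbalanced-learn resamplers with a gradient-boosting classifier. It is a minimal illustration under stated assumptions, not the authors' code: the synthetic data stands in for the preprocessed UCI Bank Marketing features, and SMOTENC is omitted because it additionally requires the indices of categorical columns.</p>

<pre><code># Minimal sketch: resampling strategies + gradient boosting, assuming
# scikit-learn, imbalanced-learn, and lightgbm are installed. The synthetic
# data is a placeholder for the preprocessed UCI Bank Marketing dataset.
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler, TomekLinks
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.89, 0.11], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)

samplers = {"under": RandomUnderSampler(random_state=42),
            "over": RandomOverSampler(random_state=42),
            "tomek": TomekLinks()}  # SMOTENC would also need categorical indices
for name, sampler in samplers.items():
    X_res, y_res = sampler.fit_resample(X_tr, y_tr)  # rebalance the training split only
    proba = LGBMClassifier(random_state=42).fit(X_res, y_res).predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)
    print(f"{name}: recall={recall_score(y_te, pred):.4f} "
          f"f1={f1_score(y_te, pred):.4f} auc={roc_auc_score(y_te, proba):.4f}")
</code></pre>

<p>The same loop extends to XGBoost and CatBoost by swapping the classifier; the key design point from the abstract is that resampling is applied to the training split only, so the test distribution stays representative of deployment.</p>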
data-start="554" data-end="569" data-col-size="sm"><strong data-start="556" data-end="568">Indexing</strong></td> <td data-start="569" data-end="701" data-col-size="lg">-</td> </tr> <tr data-start="702" data-end="835"> <td data-start="702" data-end="719" data-col-size="sm"><strong data-start="704" data-end="718">Discipline</strong></td> <td data-start="719" data-end="835" data-col-size="lg">Machine Learning</td> </tr> </tbody> </table> <p> </p> <p><strong data-start="144" data-end="197">International Journal of Machine Learning (IJOML)</strong> has published papers from authors with different country. Diversity of author's in IJOML:</p> <p><strong>Vol. 1 No. 1, June 2026</strong> : Indonesia<img style="font-size: 0.875rem; font-family: 'Noto Sans', 'Noto Kufi Arabic', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen-Sans, Ubuntu, Cantarell, 'Helvetica Neue', sans-serif;" src="https://publications.id/master/images/indonesia.png" width="20" />, Malaysia<img style="font-size: 0.875rem; font-family: 'Noto Sans', 'Noto Kufi Arabic', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen-Sans, Ubuntu, Cantarell, 'Helvetica Neue', sans-serif;" src="https://publications.id/master/images/malaysia.png" width="20" />, Poland</p> <p> </p> <p> </p> <p> </p>https://ijoml.com/index.php/ijoml/article/view/2Evaluation of Undersampling and Oversampling Techniques in Term Deposit Prediction: A Gradient Boosting Approach2026-01-27T07:56:21+07:00Lasmedi Afuanlasmedi.afuan@unsoed.ac.idAbdul Karimabdullkarim@korea.ac.krIpung Permadiipung.permadi@unsoed.ac.id<p>Time deposits play a pivotal role in maintaining banking liquidity, yet telemarketing campaigns designed to secure them are often inefficient due to low response rates and untargeted outreach. The primary challenge in predictive marketing modeling lies in extreme data class imbalance, which renders standard algorithms prone to bias and leads to a failure in detecting potential customers. This study aims to validate the effectiveness of Gradient Boosting models and empirically evaluate the impact of various resampling techniques in mitigating class distribution disparities. The applied methodology encompasses the utilization of XGBoost, LightGBM, and CatBoost algorithms on the UCI Bank Marketing dataset, integrated with Random Under-Sampling, Random Over-Sampling, SMOTENC, and Tomek Links strategies. Experimental results reveal a significant trade-off between sensitivity and precision, wherein LightGBM paired with Random Under-Sampling achieved the highest detection capability with a Recall of 88.28%. Concurrently, the combination of CatBoost with Random Over-Sampling demonstrated the optimal balance, attaining an F1-Score of 0.6040, a Recall of 81.95%, and an AUC-ROC value reaching 0.9326. These findings offer a strategic contribution to bank management in selecting analytic approaches aligned with business priorities, whether the focus is on operational cost efficiency or aggressive market penetration to optimize customer acquisition.</p>2025-01-28T00:00:00+07:00Copyright (c) 2026 Lasmedi Afuan, Abdul Karim, Ipung Permadihttps://ijoml.com/index.php/ijoml/article/view/5Benchmarking Modern Optimizers for IndoBERT-Based Sentiment Analysis on Indonesian Gojek Reviews2026-01-27T08:09:57+07:00Randi Rizalrandirizal@unsil.ac.idHidayatulah Himawanhidayatulahhimawan@utem.edu<p>User reviews on platforms like Gojek serve as critical data for business intelligence, necessitating robust automated sentiment analysis models. 
https://ijoml.com/index.php/ijoml/article/view/3
RoBERTa with Sample Reweighting and Temperature Scaling for Imbalanced Toxicity Detection: A Performance–Fairness–Calibration Study
2026-01-27T07:54:09+07:00
Lasmedi Afuan (lasmedi.afuan@unsoed.ac.id), Nurul Hidayat (nurul@unsoed.ac.id), Abdul Karim (abdullkarim@korea.ac.kr)

<p>Detecting toxic language at scale requires models that are not only accurate but also robust to demographic subgroup bias and reliable in their probability estimates; however, these objectives can conflict, especially under severe class imbalance. This study investigates the performance–fairness–calibration interplay in toxicity detection using the Jigsaw Unintended Bias dataset (124,858 comments; 5.99% toxic; identity annotations in 9.39% of samples). We aim to quantify how sample reweighting and imbalance-aware training affect global discrimination, worst-subgroup behavior, and probabilistic calibration, and to assess post-hoc temperature scaling of predicted probabilities. We compare a TF-IDF + logistic regression baseline against RoBERTa variants trained without mitigation, with sample reweighting, and with an imbalance-oriented loss, using multi-metric evaluation (AUC, Min/Worst-Subgroup AUC, ECE, and NLL). RoBERTa consistently improves global AUC over the baseline (≈0.96 vs 0.9155), while worst-subgroup AUC remains substantially lower and varies only modestly across RoBERTa variants (≈0.7726–0.7813). Calibration results indicate a marked gap between models: the baseline achieves the lowest ECE (0.0052), whereas RoBERTa exhibits higher ECE (≈0.0257) that increases further under reweighting and imbalance-oriented training (≈0.0490–0.0866), with NLL not improving consistently. These findings provide empirical evidence that fairness-oriented interventions can shift error and calibration profiles, motivating holistic evaluation and methods that jointly constrain subgroup fairness and probabilistic reliability.</p>

2026-01-28T00:00:00+07:00
Copyright (c) 2026 Lasmedi Afuan, Nurul Hidayat, Abdul Karim
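<p>Because temperature scaling is central to the study's calibration analysis, a minimal post-hoc implementation is sketched below: a single scalar T is fitted on held-out logits by minimizing NLL, which reshapes confidence without changing the predicted ranking (so AUC is untouched). The toy logits are illustrative stand-ins, not the paper's RoBERTa outputs.</p>

<pre><code># Minimal post-hoc temperature scaling, assuming torch. Sample reweighting,
# as in the abstract, would instead multiply per-example training losses by
# weights before averaging; only the calibration step is shown here.
import torch

def fit_temperature(logits, labels):
    """Fit a scalar temperature T on validation logits by minimizing NLL."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=100)
    nll = torch.nn.CrossEntropyLoss()

    def closure():
        opt.zero_grad()
        loss = nll(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()

# Toy over-confident logits standing in for model outputs.
torch.manual_seed(0)
labels = torch.randint(0, 2, (500,))
logits = 4 * torch.randn(500, 2) + 2.0 * torch.nn.functional.one_hot(labels, 2)
T = fit_temperature(logits, labels)
ce = torch.nn.CrossEntropyLoss()
print(f"T={T:.2f}  NLL before={ce(logits, labels):.4f}  after={ce(logits / T, labels):.4f}")
</code></pre>

<p>Dividing logits by a fitted T &gt; 1 softens over-confident probabilities, which is exactly why the abstract reports ECE and NLL alongside AUC: the ranking metrics cannot detect this kind of miscalibration.</p>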
https://ijoml.com/index.php/ijoml/article/view/4
Cold-Start Generalization in Educational Interaction Data: Comparing Student-Wise and Question-Wise Splits with Probabilistic Calibration
2026-01-26T21:23:41+07:00
Purwadi Purwadi (purwadi@amikompurwokerto.ac.id), Othman Bin Mohd (mothman@utem.edu), Nor Azman Bin Abu (nura@utem.edu.my)

<p>Predictive models in Intelligent Tutoring Systems often face performance degradation due to sparse data and the cold-start problem, further compounded by a lack of probability calibration in standard evaluations. This study bridges this gap by systematically evaluating the trade-off between discriminative accuracy and probabilistic reliability under student-wise and question-wise splits, utilizing interaction data from the MathE platform across eight countries. By comparing identifier-based and metadata-based Logistic Regression models under a Leave-One-Country-Out protocol, we assessed generalization against distribution shifts. The results reveal a fundamental dichotomy: while identifier-based models achieve superior discrimination (AUC 0.687) and calibration in scenarios with historical context, they suffer significant performance drops in student cold-start settings and exhibit negative transfer during cross-country deployment. Conversely, metadata-based models demonstrate greater robustness and invariance across varying demographics. We conclude that relying solely on accuracy metrics masks model uncertainty in new domains and recommend a "safe-start" strategy that prioritizes metadata-based features at system initialization, ensuring reliable pedagogical decision-making before personalizing on accumulated user history.</p>

2026-01-28T00:00:00+07:00
Copyright (c) 2026 Purwadi, Nor Azman Bin Abu, Othman Bin Mohd
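<p>To make the split design concrete, here is a hedged sketch of student-wise versus question-wise evaluation using grouped splits and a metadata-based logistic regression. The toy columns (topic, difficulty) are placeholders for the MathE schema; a Leave-One-Country-Out protocol would apply scikit-learn's LeaveOneGroupOut over a country column in the same way.</p>

<pre><code># Hedged sketch: cold-start evaluation via grouped splits, so every test
# student (or question) is unseen during training. Assumes numpy, pandas,
# and scikit-learn; all columns are illustrative stand-ins for MathE data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({"student_id": rng.integers(0, 80, n),
                   "question_id": rng.integers(0, 300, n),
                   "topic": rng.integers(0, 8, n),     # metadata feature
                   "difficulty": rng.random(n)})        # metadata feature
df["correct"] = (rng.random(n) < 0.85 - 0.5 * df["difficulty"]).astype(int)

def cold_start_eval(group_col):
    split = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    tr, te = next(split.split(df, groups=df[group_col]))  # groups never straddle splits
    feats = ["topic", "difficulty"]                       # metadata-based model
    clf = LogisticRegression().fit(df.loc[tr, feats], df.loc[tr, "correct"])
    p = clf.predict_proba(df.loc[te, feats])[:, 1]
    y = df.loc[te, "correct"]
    # Brier score doubles as a simple calibration-sensitive metric here.
    print(f"{group_col}: AUC={roc_auc_score(y, p):.3f} Brier={brier_score_loss(y, p):.3f}")

cold_start_eval("student_id")    # student-wise split (student cold start)
cold_start_eval("question_id")   # question-wise split (question cold start)
</code></pre>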
https://ijoml.com/index.php/ijoml/article/view/6
Stability-Aware Hierarchical Forecasting: Synergizing Conformal Prediction with Decomposition Ensembles
2026-01-27T08:07:34+07:00
Damar Nurcahyono (damarnc@polnes.ac.id), Rajiansyah Rajiansyah (rajiansyah@pwr.edu.pl), Hamdani Hamdani (hamdani@unmul.ac.id)

<p>Accurate retail demand forecasting is frequently impeded by high-dimensional hierarchies and intermittent sales patterns, which destabilize traditional models and compromise operational decision-making. To address these challenges, this study develops a stability-aware forecasting framework that unifies global machine learning ensembles with hierarchical reconciliation and conformal uncertainty calibration. Utilizing the large-scale M5 dataset, the methodology combines decomposition-based feature engineering with a global Light Gradient Boosting Machine (LightGBM), reinforced by a robust Bottom-Up reconciliation strategy and Centered Conformalized Quantile Regression (CQR). Empirical results based on rolling-origin cross-validation demonstrate that the proposed framework achieves a superior Weighted Root Mean Squared Scaled Error (WRMSSE) of 8.7723, significantly outperforming both the standalone LightGBM (9.4846) and the Seasonal Naïve baseline (10.1740). Furthermore, the Centered CQR mechanism effectively balances predictive sharpness with coverage, attaining a Scaled Pinball Loss (SPL) of 0.2347 and thereby mitigating the error degradation often observed in sparse data regimes. These findings confirm that integrating structural decomposition with rigorous reconciliation acts as a potent regularizer, offering a scientifically robust solution for managing non-stationarity and signal sparsity in complex retail supply chains.</p>

2026-01-28T00:00:00+07:00
Copyright (c) 2026 Damar Nurcahyono, Rajiansyah Rajiansyah, Hamdani Hamdani
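<p>The conformal step this abstract relies on can be sketched in a few lines. The code below is an illustrative standard CQR construction under stated assumptions (LightGBM quantile models, a synthetic series standing in for M5 demand); the authors' "Centered" variant and the hierarchical reconciliation step are not reproduced here.</p>

<pre><code># Illustrative conformalized quantile regression (CQR) on LightGBM quantile
# models; the synthetic data stands in for M5 demand. Assumes numpy, lightgbm.
import numpy as np
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
X = rng.random((3000, 5))
y = 10 * X[:, 0] + rng.poisson(3, 3000)          # intermittent-ish count target
X_tr, X_cal, X_te = X[:2000], X[2000:2500], X[2500:]
y_tr, y_cal, y_te = y[:2000], y[2000:2500], y[2500:]

alpha = 0.1                                       # target 90% coverage
lo = LGBMRegressor(objective="quantile", alpha=alpha / 2).fit(X_tr, y_tr)
hi = LGBMRegressor(objective="quantile", alpha=1 - alpha / 2).fit(X_tr, y_tr)

# Conformity scores on a held-out calibration split widen (or narrow) the
# raw quantile interval just enough to restore finite-sample coverage.
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
q = np.quantile(scores, min(1.0, (1 - alpha) * (1 + 1 / len(scores))))
lower, upper = lo.predict(X_te) - q, hi.predict(X_te) + q
print("empirical coverage:", np.mean((y_te >= lower) & (y_te <= upper)))
</code></pre>

<p>In a hierarchical setting, intervals like these would be produced at the bottom level and then aggregated under the Bottom-Up strategy, which is why the abstract pairs CQR with reconciliation rather than calibrating each level independently.</p>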