
Loc Nguyen, Anum Shafiq (2019, January 29). Semi-mixture Regression Model for Incomplete Data
Type

Journal Article

Abstract

The regression expectation maximization (REM) algorithm, a variant of the expectation maximization (EM) algorithm, uses in parallel a long regression model and many short regression models to solve the problem of incomplete data. Experimental results demonstrated the resistance of REM to incomplete data: the accuracy of REM decreases only insignificantly when the data sample is made sparse with loss ratios up to 80%. However, the convergence speed of REM can decrease if there are many independent variables. In this research, we use a mixture model to decompose REM into many partial regression models. These partial regression models are then unified in the so-called semi-mixture regression model. Our proposed algorithm is called the semi-mixture regression expectation maximization (SREM) algorithm because it combines the mixture model and the REM algorithm, but it does not implement the mixture model in full. In other words, only the mixture coefficients in SREM are estimated according to the mixture model, whereas the regression coefficients are estimated by REM. The experimental results show that SREM converges faster than REM, although the accuracy of SREM is not better than that of REM in fair tests.
Keywords: Regression Model, Mixture Regression Model, Expectation Maximization Algorithm, Incomplete Data.
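The core idea behind REM, filling missing values with regression-based expectations, can be sketched in miniature. The snippet below is an illustrative single-predictor imputation pass, not the authors' exact algorithm; the function names and the one-variable setting are my own simplifications.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def regression_impute(x, y):
    """Replace missing responses (None) by predictions of a line fitted
    on the observed pairs: the regression-expectation step in miniature."""
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    a, b = fit_line([p[0] for p in obs], [p[1] for p in obs])
    return [yi if yi is not None else a + b * xi for xi, yi in zip(x, y)]
```

In REM proper, a long regression model and many short regression models play this role for every incomplete variable, and the fit-and-impute cycle is iterated as part of EM rather than run once.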

Published

Adaptation and Personalization (ADP), Volume 1, Issue 1, pages 1-20. Publication date is January 29, 2019.
ISSN: , Open Access.
Editors: Timothy Schmutte.
Publisher: International Technology and Science Publications (ITS).

Identifiers

DOI: 10.31058/j.adp.2019.11001

Links

http://www.itspoa.com/itsadmin/Ll/LL.DE.asp?action=Paper_Information&id=1701

Citations

Nguyen, L., & Shafiq, A. (2019, January 29). Semi-mixture Regression Model for Incomplete Data. (T. Schmutte, Ed.) Adaptation and Personalization (ADP), 1(1), 1-20. doi:10.31058/j.adp.2019.11001

Cited


Indexed


Metrics


Categories

Statistics


Loc Nguyen, Anum Shafiq (2018, December 31). Mixture Regression Model for Incomplete Data
Type

Journal Article

Abstract

The Regression Expectation Maximization (REM) algorithm, a variant of the Expectation Maximization (EM) algorithm, uses in parallel a long regression model and many short regression models to solve the problem of incomplete data. Experimental results demonstrated the resistance of REM to incomplete data: the accuracy of REM decreases only insignificantly when the data sample is made sparse with loss ratios up to 80%. However, as with traditional regression analysis methods, the accuracy of REM can decrease if the data varies in complicated ways with many trends. In this research, we propose a so-called Mixture Regression Expectation Maximization (MREM) algorithm. MREM is the full combination of REM and the mixture model, in which we use two EM processes in the same loop. MREM uses the first EM process, for the exponential family of probability distributions, to estimate missing values as REM does. MREM then uses the second EM process to estimate parameters as the mixture model method does. The purpose of MREM is to take advantage of both REM and the mixture model. Unfortunately, experimental results show that MREM is less accurate than REM. However, MREM remains essential because a different approach to the mixture model can be obtained by fusing the linear equations of MREM into a unique curve equation.
Keywords: Fetal Weight Estimation, Regression Model, Ultrasound Measures, Expectation Maximization Algorithm, Missing Data.

Published

Revista Sociedade Científica, Volume 1, Issue 3, pages 1-25. Publication date is December 31, 2018.
ISSN: 2595-8402, Open Access.
Editors: Istael de Lima Espinosa.
Publisher: Sociedade Científica.

Identifiers

DOI: 10.5281/zenodo.2528978

Links

http://revista.scientificsociety.net/2018/12/31/mixtureregressionmodelforincompletedata

Citations

Nguyen, L., & Shafiq, A. (2018, December 31). Mixture Regression Model for Incomplete Data. (L. E. Istael, Ed.) Revista Sociedade Científica, 1(3), 1-25. doi:10.5281/zenodo.2528978

Cited


Indexed


Metrics


Categories

Statistics


Loc Nguyen, Thu-Hang T. Ho (2018, December 17). Fetal Weight Estimation in Case of Missing Data
Type

Journal Article

Abstract

Fetal weight estimation before delivery is important in obstetrics, as it assists doctors in diagnosing abnormal or diseased cases. Linear regression based on ultrasound measures such as biparietal diameter (bpd), head circumference (hc), abdominal circumference (ac), and fetal length (fl) is a common statistical method for weight estimation. There is a demand to retrieve the regression model in the case of incomplete data, because taking ultrasound examinations is a hard task and early weight estimation is necessary in some cases. In this research, we proposed a so-called regression expectation maximization (REM) algorithm, a combination of the linear regression method and the expectation maximization (EM) method, to construct the regression model when both ultrasound measures and fetal weight are missing. The special technique in REM is to build in parallel an entire regression function and many partial inverse regression functions for solving the problem of highly sparse data, in which missing values are filled by expectations relevant to both the entire regression function and the inverse regression functions. Experimental results demonstrated the resistance of REM to incomplete data: the accuracy of REM decreases only insignificantly when the data sample is made sparse with loss ratios up to 80%.
Keywords: Fetal Weight Estimation, Regression Model, Ultrasound Measures, Expectation Maximization Algorithm, Missing Data.

Published

Experimental Medicine (EM), Volume 1, Issue 2, pages 45-65. Publication date is December 17, 2018.
ISSN: , Open Access.
Editors: Timothy Schmutte.
Publisher: International Technology and Science Publications (ITS).

Identifiers

DOI: 10.31058/j.em.2018.12004

Links

http://www.itspoa.com/itsadmin/Ll/LL.DE.asp?action=Paper_Information&id=1630

Citations

Nguyen, L., & Ho, Thu-Hang T. (2018, December 17). Fetal Weight Estimation in Case of Missing Data. (T. Schmutte, Ed.) Experimental Medicine (EM), 1(2), 45-65. doi:10.31058/j.em.2018.12004

Cited


Indexed


Metrics


Categories

Statistics, Medicine


Loc Nguyen, Thu-Hang Thi Ho (2018, August 1). Phoebe Framework and Experimental Results for Estimating Fetal Age and Weight
Type

Book Chapter

Abstract

Fetal age and weight estimation plays an important role in pregnancy treatment. There are many estimation formulas created by combining statistics and obstetrics. However, such formulas give optimal estimates if and only if they are applied to the specific community for which they were built. This research proposes a so-called Phoebe framework that supports physicians and scientists in finding the most accurate formulas with regard to the community where they do their research. The built-in algorithm of the Phoebe framework uses statistical regression for fetal age and weight estimation based on fetal ultrasound measures such as biparietal diameter, head circumference, abdominal circumference, fetal length, arm volume, and thigh volume. This algorithm is based on heuristic assumptions that aim to produce good estimation formulas as fast as possible. Experimental results show that the framework produces optimal formulas with high adequacy and accuracy. Moreover, the framework gives physicians and scientists facilities for exploiting useful statistical information in pregnancy data. The Phoebe framework is computer software available at http://phoebe.locnguyen.net.
Keywords: fetal age estimation, fetal weight estimation, ultrasound measures, regression model, estimation formula.

Published

In Thomas F. Heston (Author, Editor), eHealth - Making Health Care Smarter, chapter 7, pages 99-123. Publication date is August 1, 2018.
ISBN online: 9781789235234. ISBN print: 9781789235227. DOI: 10.5772/intechopen.71820. Open Access. Paperback: 364 pages.
Publisher: InTechOpen.
Place: Janeza Trdine 9, 51000 Rijeka, Croatia.

Identifiers

DOI: 10.5772/intechopen.74883

Links

https://www.intechopen.com/books/ehealthmakinghealthcaresmarter/phoebeframeworkandexperimentalresultsforestimatingfetalageandweight

Citations

Nguyen, L., & Ho, Thu-Hang T. (2018, August 1). Phoebe Framework and Experimental Results for Estimating Fetal Age and Weight. In T. F. Heston (Ed.), eHealth - Making Health Care Smarter (pp. 99-123). Rijeka, Croatia: InTechOpen. doi:10.5772/intechopen.74883

Cited


Indexed


Metrics


Categories

Statistics, Medicine


Loc Nguyen (2018, July 10). Proposal of Evaluating Patients' Satisfaction about Quality of Healthcare System by Nonparametric Quality Control
Type

Preprinted Article

Abstract

Hospital quality assessment is a key subject in hospital management. Patient satisfaction, here called criterion F, is an important quality measure. It is necessary to use F as a control factor to gradually improve the hospital management process. Because F is fuzzy, I propose a so-called nonparametric quality control (NQC) method, a combination of a nonparametric test and a quality control chart, in order to estimate F. If F becomes a vector composed of many sub-criteria, then the NQC method is extended to a multivariate NQC (MNQC) method used for evaluating many concerned opinions of patients. MNQC replaces the distance between an observation and the null hypothesis by their Pearson coefficient. This research is a proposal because I have not yet made experiments on NQC and MNQC.
Keywords: patient satisfaction, hospital management, nonparametric test, quality control chart, nonparametric quality control, multivariate nonparametric quality control.
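As a concrete reference point for the substitution the abstract describes, the Pearson coefficient that MNQC uses in place of a plain distance is computed by the standard formula below; the variable names are my own, and this is not code from the proposal.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length vectors:
    covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Unlike a Euclidean distance, the coefficient is bounded in [-1, 1] and insensitive to scale, which is presumably why it is attractive as a closeness measure between an observation vector and the hypothesized vector.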

Preprinted

OSF Preprints. Preprinted date is July 10, 2018.
Publisher: Open Science Framework (OSF).

Identifiers

DOI: 10.17605/OSF.IO/VUMHN

Links

https://osf.io/vumhn

Citations

Nguyen, L. (2018, July 10). Proposal of Evaluating Patients' Satisfaction about Quality of Healthcare System by Nonparametric Quality Control. Open Science Framework (OSF) Preprints. doi:10.17605/OSF.IO/VUMHN

Cited


Indexed

Google Scholar.

Metrics


Categories

Statistics


Loc Nguyen, Thu-Hang T. Ho (2018, May 7). Early Fetal Weight Estimation with Expectation Maximization Algorithm
Type

Journal Article

Abstract

Fetal weight estimation before delivery is important in obstetrics, as it assists doctors in diagnosing abnormal or diseased cases. Linear regression based on ultrasound measures such as biparietal diameter (bpd), head circumference (hc), abdominal circumference (ac), and fetal length (fl) is a common statistical method for weight estimation, but the regression model requires that the time points of collecting such measures not be too far from the last ultrasound scans. Therefore this research proposes a method of early weight estimation based on the expectation maximization (EM) algorithm, so that ultrasound measures can be taken at any time points in the gestational period. In other words, the gestational sample can lack some or many fetus weights, which is convenient for practitioners because they need not concern themselves with fetus weights when taking ultrasound examinations. The proposed method is called the dual regression expectation maximization (DREM) algorithm. Experimental results indicate that the accuracy of DREM decreases insignificantly even when the completeness of the ultrasound sample decreases significantly. Thus it is shown that DREM withstands missing values in an incomplete or sparse sample.
Keywords: fetal weight estimation, regression model, ultrasound measures, expectation maximization algorithm.

Published

Experimental Medicine (EM), Volume 1, Issue 1, pages 12-30. Publication date is May 7, 2018.
ISSN: , Open Access.
Editors: Timothy Schmutte.
Publisher: International Technology and Science Publications (ITS).

Preprinted

Preprints 2018, 2018030212 (doi: 10.20944/preprints201803.0212.v1). Preprinted date is March 26, 2018.
Publisher: Preprints.

Identifiers

DOI: 10.31058/j.em.2018.11002

Links

http://www.itspoa.com/itsadmin/Ll/LL.DE.asp?action=Paper_Information&id=1597

Citations

Nguyen, L., & Ho, Thu-Hang T. (2018, May 7). Early Fetal Weight Estimation with Expectation Maximization Algorithm. (T. Schmutte, Ed.) Experimental Medicine (EM), 1(1), 12-30. doi:10.31058/j.em.2018.11002

Cited


Indexed


Metrics


Categories

Statistics, Medicine


Loc Nguyen, Minh-Phung T. Do (2018, January 18). A Novel Collaborative Filtering Algorithm by Bit Mining Frequent Itemsets
Type

Journal Article

Abstract

Collaborative filtering (CF) is a popular technique in recommendation research. Concretely, the items recommended to a user are determined by surveying her/his communities. There are two main CF approaches: memory-based and model-based. I propose a new model-based CF algorithm that mines frequent itemsets from the rating database; items which belong to frequent itemsets are recommended to the user. My CF algorithm gives an immediate response because the mining task is performed in offline process mode. I also propose another so-called Roller algorithm for improving the process of mining frequent itemsets. The Roller algorithm is built on the heuristic assumption that "the larger the support of an item is, the more likely it is that this item will occur in some frequent itemset". It is modeled on a whitewashing task, rolling a roller over a wall in a way that is capable of picking up frequent itemsets. Moreover, I provide enhanced techniques such as bit representation, bit matching, and bit mining in order to speed up the recommendation process. These techniques take advantage of bitwise operations (AND, NOT) so as to reduce storage space and make the algorithms run faster.
Keywords: collaborative filtering, mining frequent itemsets, bit matching, bit mining.
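The bit techniques named in the abstract rest on a simple identity: if each transaction is stored as an integer bitmask, an itemset S is contained in transaction t exactly when t AND S equals S. A minimal sketch of support counting on that representation (my own variable names, not the paper's code):

```python
def support(transactions, itemset):
    """Count transactions containing every item of `itemset`, where both
    are integer bitmasks (bit i set = item i present).  Containment is a
    single bitwise AND: t & itemset == itemset."""
    return sum(1 for t in transactions if t & itemset == itemset)

# items: A = bit 0, B = bit 1, C = bit 2
transactions = [0b011, 0b110, 0b111]   # {A,B}, {B,C}, {A,B,C}
```

One machine word thus encodes up to 64 items, and a support scan costs one AND and one compare per transaction, which is the storage and speed advantage the abstract refers to.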

Accepted

International Journal of Applied Mathematics and Machine Learning, Scientific Advances Publishers. Acceptance date is July 15, 2016.
ISSN: 2394-2258.
Editors: Li Li, Shuaiqi Liu, Mehmet Koc, José Luis LópezBonilla, Balan Sethuramalingam, Bin Guo, Loc Nguyen, Hind Rustum Mohammed Shaaban, Srinivas Nowduri.
Publisher: Scientific Advances Publishers, India.

Preprinted

PeerJ Preprints, volume 6, issue e26444v1. Preprinted date is January 18, 2018.
ISSN: 2167-9843.
Publisher: PeerJ.

Identifiers

DOI: 10.7287/peerj.preprints.26444v1

Links

https://peerj.com/preprints/26444

Citations

Nguyen, L., & Do, Minh-Phung T. (2018, January 18). A Novel Collaborative Filtering Algorithm by Bit Mining Frequent Itemsets. PeerJ Preprints, 6(e26444v1). doi:10.7287/peerj.preprints.26444v1

Cited


Indexed

PeerJ Preprints: Crossref, Google Scholar.

Metrics


Categories

Computer Science


Loc Nguyen (2018, January 17). A Maximum Likelihood Mixture Approach for Multivariate Hypothesis Testing in case of Incomplete Data
Type

Journal Article

Abstract

Multivariate hypothesis testing becomes more and more necessary as data changes from scalar, univariate format to multivariate format; in particular, financial and biological data often consist of n-dimensional vectors. The likelihood ratio test is the best method for testing the mean of a multivariate sample with known or unknown covariance matrix, but it cannot be used on incomplete data, and data incompletion is common in reality for many reasons. Therefore, this research proposes a new approach that makes it possible to apply the likelihood ratio test to incomplete data. Instead of replacing missing values in the incomplete sample by estimated values, this approach classifies the incomplete sample into groups, and each group is represented by a potential or partial distribution. All partial distributions are unified into a mixture model which is optimized via the expectation maximization (EM) algorithm. Finally, the likelihood ratio test is performed on the mixture model instead of the incomplete sample. This research provides a thorough description of the proposed approach and the mathematical proof necessary for it. The comparison of the mixture model approach and the missing-value filling approach is also discussed.
Keywords: maximum likelihood, mixture model, multivariate hypothesis testing, incomplete data.
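The EM optimization of the mixture model hinges on an E-step that assigns each data point a posterior probability of belonging to each component. A minimal one-dimensional Gaussian sketch of that step (illustrative only; the paper's mixture is over the partial distributions of the grouped sample):

```python
import math

def gauss(x, mu, var):
    """Density of a normal distribution N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def responsibilities(x, weights, means, variances):
    """E-step of a Gaussian mixture: the posterior probability that x was
    generated by each component, proportional to weight * density."""
    joint = [w * gauss(x, m, v)
             for w, m, v in zip(weights, means, variances)]
    total = sum(joint)
    return [j / total for j in joint]
```

The M-step then re-estimates weights, means, and variances from these responsibilities, and the resulting mixture likelihood is what the ratio test is evaluated on.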

Accepted

Journal of Mathematics and System Science (JMSS). Acceptance date is July 22, 2013.
ISSN online: 2159-5305, ISSN print: 2159-5291, Open Access.
Editors: Assia Guezane-Lakoud, William P. Fox, Elisa Francomano, Sergo A. Episkoposian, Elizbar Nadaraya, Alexander Nikolaevich Raikov, Baha ŞEN, Claudio Cuevas, Wattanavadee Sriwattanapongse, Mohammad Mehdi Rashidi.
Publisher: David Publishing Company, USA.

Preprinted

OSF Preprints. Preprinted date is January 17, 2018.
Publisher: Open Science Framework.

Identifiers


Links

http://osf.io/whvq8

Citations

Nguyen, L. (2018, January 17). A Maximum Likelihood Mixture Approach for Multivariate Hypothesis Testing in case of Incomplete Data. Open Science Framework (OSF) Preprints. Retrieved from http://osf.io/whvq8

Cited


Indexed

JMSS: Academic Keys, BASE, CEPS, CQVIP, CSTJ, CiteFactor, CSA, DBH, EBSCO, EZB, getCITED, Google Scholar, Index Copernicus, InfoBase Index, InnoSpace, NSD, OCLC, Open JGate, PAIS, PBN, ProQuest, Scholar Steer, SHERPA, SIS, SJournal Index, Summon Serials Solutions, Turkish Education Index, UCSD, UDL, UlrichsWeb, WZB

Metrics


Categories

Statistics


Loc Nguyen (2018, January 17). Nonparametric Hypothesis Testing Report
Type

Study Report

Abstract

This report is a brief survey of nonparametric hypothesis testing. It includes four main sections about hypothesis testing, an additional section discussing goodness-of-fit, and a conclusion section. The sign test section gives an overview of nonparametric testing, beginning with the test on the sample median without the assumption of a normal distribution. The signed-rank test section and the rank-sum test section concern improvements of the sign test. The prominence of the signed-rank test is its ability to test the sample mean under the assumption of a symmetric distribution. The rank-sum test discards the task of assigning and counting plus signs, and so it is the most effective among the ranking test methods. The nonparametric ANOVA section discusses the application of analysis of variance (ANOVA) in the nonparametric model. ANOVA is useful for comparing and evaluating various data samples at the same time. The nonparametric goodness-of-fit test section, an additional section, focuses on a different hypothesis, which measures the distribution similarity between two samples: it determines whether two samples have the same distribution without concern for the form of the distribution. The last section is the conclusion. Note that in this report the terms sample and data sample have the same meaning. A sample contains many data points; each data point is also called an observation.
Keywords: nonparametric testing, nonparametric ANOVA.
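The sign test that opens the report reduces to a binomial tail: under H0 each observation (ties dropped) falls above the hypothesized median with probability 1/2. A compact sketch of the two-sided p-value, as a plausible illustration rather than the report's own code:

```python
from math import comb

def sign_test_p(sample, m0):
    """Two-sided sign test of H0: median == m0.  Observations equal to m0
    are dropped; under H0 the count of plus signs is Binomial(n, 1/2)."""
    plus = sum(1 for x in sample if x > m0)
    n = sum(1 for x in sample if x != m0)
    k = min(plus, n - plus)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

The signed-rank and rank-sum tests described in the report refine this scheme by using the magnitudes of the deviations, not just their signs.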

Accepted

Science Journal of Mathematics and Statistics (SJMS). Acceptance date is July 17, 2013.
ISSN: 2276-6324, Open Access.
Publisher: Science Journal Publication.

Preprinted

OSF Preprints. Preprinted date is January 17, 2018.
Publisher: Open Science Framework.

Awarded

Certified by Science Journal Publication

Identifiers


Links

http://osf.io/tj9cf

Citations

Nguyen, L. (2018, January 17). Nonparametric Hypothesis Testing Report. Open Science Framework (OSF) Preprints. Retrieved from http://osf.io/tj9cf

Cited


Indexed

SJMS: CrossRef, Gale, Google Scholar, Summon Serials Solutions, UlrichsWeb

Metrics


Categories

Statistics


Loc Nguyen, Ha-Duong Thi Phan (2017, November 2). Converting Graphic Relationships into Conditional Probabilities in Bayesian Network
Type

Book Chapter

Abstract

A Bayesian network is a powerful mathematical tool for prediction and diagnosis applications. A large Bayesian network can be constituted of many simple networks, which in turn are constructed from simple graphs. A simple graph consists of one child node and many parent nodes. The strength of each relationship between a child node and a parent node is quantified by a weight, and all relationships share the same semantics, such as prerequisite, diagnostic, and aggregation. The research focuses on converting graphic relationships into conditional probabilities in order to construct a simple Bayesian network from a graph. The diagnostic relationship is the main research object, for which a sufficient diagnostic proposition is proposed for validating diagnostic relationships. Relationship conversion adheres to logic gates such as AND, OR, and XOR, which is an essential feature of the research.
Keywords: diagnostic relationship, Bayesian network, transformation coefficient.

Published

In Javier Prieto Tejedor (Author, Editor), Bayesian Inference, chapter 6, pages 97-143. Publication date is November 2, 2017.
ISBN online: 9789535135784. ISBN print: 9789535135777. DOI: 10.5772/66264. Open Access. Paperback: 380 pages.
Publisher: InTechOpen.
Place: Janeza Trdine 9, 51000 Rijeka, Croatia.

Presented

The research was accepted as a short communication at the International Congress of Mathematicians 2018 (ICM 2018), held on August 1-9, 2018, in Rio de Janeiro, Brazil.
I also presented the research at the Workshop on Graph Theory and Applications with Prof. Dr. Ha-Duong Thi Phan. The workshop was held on November 15-16, 2018, in Hanoi, Vietnam, organized by the Vietnam Institute of Mathematics, the Vietnam Institute for Advanced Study in Mathematics (VIASM), and the Hanoi University of Science and Technology (HUST).

Identifiers

DOI: 10.5772/intechopen.70057

Links

https://www.intechopen.com/books/bayesianinference/convertinggraphicrelationshipsintoconditionalprobabilitiesinbayesiannetwork

Citations

Nguyen, L. (2017, November 2). Converting Graphic Relationships into Conditional Probabilities in Bayesian Network. In J. P. Tejedor (Ed.), Bayesian Inference (pp. 97-143). Rijeka, Croatia: InTechOpen. doi:10.5772/intechopen.70057

Cited


Indexed


Metrics


Categories

Mathematics, Computer Science


Loc Nguyen (2017, April 24). A Proposal of Loose Asymmetric Cryptography Algorithm
Type

Conference Paper

Abstract

Although the traditional asymmetric algorithm along with its implementations is successful in keeping important documents confidential, it uses only one private key. This research proposes the loose asymmetric (LA) algorithm to satisfy the requirement of generating many access keys. Each access key is granted to only one user. This demand is real because a group of members needs to retrieve the same documents while each member requires confidentiality of access. Because the implementation of the LA algorithm is complicated, I also propose two schemes for deploying it. The research is a proposal because I have not yet made experiments on the LA algorithm.
Keywords: Asymmetric algorithm, Linear algebra, Mask matrix, Access key.

Published

Proceedings of The 2nd International Conference on Software, Multimedia and Communication Engineering (SMCE 2017), DEStech Transactions on Computer Science and Engineering, pages 414-422.
ISBN: 9781605954585, ISSN: 2475-8841.
Publisher: DEStech.
Place and date: Shanghai, China, April 23-24, 2017.

Identifiers

DOI: 10.12783/dtcse/smce2017/12462

Links

http://dpiproceedings.com/index.php/dtcse/article/view/12462

Citations

Nguyen, L. (2017, April 24). A Proposal of Loose Asymmetric Cryptography Algorithm. Proceedings of The 2nd International Conference on Software, Multimedia and Communication Engineering (SMCE 2017), DEStech Transactions on Computer Science and Engineering (pp. 414-422). Shanghai: DEStech. doi:10.12783/dtcse/smce2017/12462

Cited


Indexed

CNKI, Ei, Thomson Reuters - CPCI

Metrics


Categories

Computer Science, Mathematics


Loc Nguyen (2017, June 9). Global Optimization with Descending Region Algorithm
Type

Journal Article

Abstract

Global optimization is necessary in some cases when we want to achieve the best solution or require a new solution better than the old one. However, global optimization is a hard problem. The gradient descent method is a well-known technique for finding a local optimizer, whereas the approximation solution approach aims to simplify how the global optimization problem is solved. In order to find the global optimizer in the most practical way, I propose a so-called descending region (DR) algorithm, a combination of the gradient descent method and the approximation solution approach. The idea of the DR algorithm is that, given a known local minimizer, a better minimizer is searched for only in a so-called descending region under that local minimizer. A descending region begins at a so-called descending point, which is the main subject of the DR algorithm. A descending point, in turn, is a solution of an intersection equation (A). Finally, I prove and provide a simpler linear equation system (B) derived from (A). Thus (B) is the most important result of this research, because (A) is solved by solving (B) sufficiently many times. In other words, the DR algorithm is refined many times so as to produce such a (B) for searching for the global optimizer. I propose a so-called simulated Newton-Raphson (SNR) algorithm, a simulation of the Newton-Raphson method, to solve (B). The starting point is very important for the SNR algorithm to converge. Therefore, I also propose a so-called RTP algorithm, a refined and probabilistic process, in order to partition the solution space and generate random testing points, which aims to estimate the starting point of the SNR algorithm. In general, I combine the three algorithms DR, SNR, and RTP to solve the hard problem of global optimization.
Although the approach is a divide-and-conquer methodology in which global optimization is split into local optimization, equation solving, and partitioning, the solution is a synthesis in which DR is the backbone connecting itself with SNR and RTP.
Keywords: Global Optimization, Gradient Descent Method, Descending Region, Descending Point.
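The local-search building block that DR starts from, plain gradient descent, can be sketched as follows; the learning rate and iteration count here are illustrative choices of mine, not values from the paper.

```python
def gradient_descent(grad, x0, lr=0.1, n_iter=100):
    """Find a local minimizer by repeatedly stepping against the gradient."""
    x = x0
    for _ in range(n_iter):
        x -= lr * grad(x)
    return x
```

DR then searches for a better minimizer only inside the descending region under the local minimizer that a step like this returns, rather than restarting blindly.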

Published

Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, Applied and Computational Mathematics (ACM), Volume 6, Issue 4-1, pages 72-82. Publication date is June 9, 2017.
ISSN print: 2328-5605, ISSN online: 2328-5613, Open Access.
Editors of Special Issue: L. Nguyen et al.
Publisher: Science Publishing Group.

Presented

The 3rd Science & Technology Conference of Ho Chi Minh University of Food & Industry, held on July 4, 2017, Ho Chi Minh, Vietnam.

Identifiers

DOI: 10.11648/j.acm.s.2017060401.17

Links

http://www.sciencepublishinggroup.com/journal/paperinfo?journalid=147&doi=10.11648/j.acm.s.2017060401.17

Citations

Nguyen, L. (2017, June 9). Global Optimization with Descending Region Algorithm. (L. Nguyen et al., Eds.) Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, Applied and Computational Mathematics (ACM), 6(4-1), 72-82. doi:10.11648/j.acm.s.2017060401.17

Cited


Indexed

Academic Keys, ARDI, CNKI, CrossRef, DRJI, EZB, Journal Seek, MIAR, PBN, Research Bible, WorldCat, WZB, ZDB, Zentralblatt MATH.

Metrics


Categories

Mathematics


Loc Nguyen, Thu-Hang T. Ho (2017, March 13). Experimental Results of Phoebe Framework: Optimal Formulas for Estimating Fetus Weight and Age
Type

Journal Article

Abstract

Fetal age and weight estimation plays an important role in pregnancy treatment. There are many estimation formulas created by combining statistics and obstetrics. However, such formulas give optimal estimates if and only if they are applied to the specific community for which they were built. We proposed a so-called Phoebe framework that supports scientists in finding the most accurate formulas with regard to the community where they do their research. This paper focuses on using the Phoebe framework to derive optimal formulas from experimental results. In other words, this paper is an evaluation of the Phoebe framework.
Keywords: fetal age estimation, fetal weight estimation, regression model, estimate formula.

Published

Journal of Community & Public Health Nursing, Volume 3, Issue 2, pages 1-5. Publication date is March 13, 2017.
ISSN: 2471-9846, Open Access.
Editors: Helen J. C. Shaji, Mari Carmen Portillo, Martin M. Zdanowicz.
Publisher: OMICS.

Presented

Poster presentation at The 7th Mekong Delta Obstetrics & Gynaecology Conference, held on July 29, 2017, Can Tho, Vietnam.

Identifiers

DOI: 10.4172/2471-9846.1000163

Links

https://goo.gl/qiMOXh

Citations

Nguyen, L., & Ho, Thu-Hang T. (2017, March 13). Experimental Results of Phoebe Framework: Optimal Formulas for Estimating Fetus Weight and Age. (H. J. Shaji, M. C. Portillo, & M. M. Zdanowicz, Eds.) Journal of Community & Public Health Nursing, 3(2), 1-5. doi:10.4172/2471-9846.1000163

Cited


Indexed

EBSCO, Hamdard University, Publons, RefSeek, WorldCat

Metrics


Categories

Statistics, Medicine


Loc Nguyen (2016, October 31). Beta Likelihood Estimation in Learning Bayesian Network Parameter
Type

Book Chapter

Abstract

Maximum likelihood estimation (MLE) is a popular technique of statistical parameter estimation. For the case that a random variable conforms to the beta distribution, the research focuses on applying MLE to the beta density function. This method is called beta likelihood estimation (BLE), which yields useful estimation equations. It is easy to calculate statistical estimates based on these equations when the parameters of the beta distribution are positive integers. Although I wrote two articles relevant to BLE and its application to specifying the prior probabilities of a Bayesian network, this chapter is a full report of BLE in learning Bayesian network parameters, which takes advantage of interesting features of analytic functions such as the gamma, digamma, and trigamma functions.
Keywords: Maximum likelihood estimation, beta distribution, beta likelihood estimation, gamma function, Bayesian network parameter.

Published

In United Scholars Publications (Author), Advances in Computer Networks and Information Technology (Vol. II), pages 17-82. Publication date is October 31, 2016.
ISBN-10: 1539855228. ISBN-13: 9781539855224. Paperback: 214 pages.
Publisher: CreateSpace Independent Publishing Platform.

Identifiers


Links

https://goo.gl/R4dwrf

Citations

Nguyen, L. (2016, October 31). Beta Likelihood Estimation in Learning Bayesian Network Parameter. In United Scholars Publications (Author), Advances in Computer Networks and Information Technology (Vol. II). USA: CreateSpace Independent Publishing Platform. Retrieved from https://goo.gl/R4dwrf

Cited


Indexed


Metrics


Categories

Mathematics


Loc Nguyen (2016, October 18). Estimating Peak Bone Mineral Density in Osteoporosis Diagnosis by Maximum Distribution
Type

Journal Article

Abstract

The T-score is very important to the diagnosis of osteoporosis, and its formula is calculated from two parameters: peak bone mineral density (pBMD) and the variance of pBMD. This research proposes a new method to estimate these parameters with the recognition that pBMD conforms to maximum distributions, for instance the Gumbel distribution. Firstly, my method models the pBMD sample as a series of maximum values, and such values are assumed to obey the Gumbel distribution. Secondly, I apply the moment technique to estimate the mean and variance of the Gumbel distribution with regard to the series of maximum values. This mean and variance are adjusted to become the best estimates of pBMD and pBMD variance. There is no normality assumption in this research, because pBMD is essentially an extreme value with low frequency in the population, and so the estimate of pBMD will be more accurate if we take full advantage of the specific characteristics of the Gumbel maximum distribution.
Keywords: Peak Bone Mineral Density, Osteoporosis Diagnosis, Gumbel Maximum Distribution.
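The moment step described above can be made concrete. For a Gumbel (maximum) distribution, variance = pi^2 * beta^2 / 6 and mean = mu + gamma * beta, where gamma is the Euler-Mascheroni constant; the sketch below solves these two equations from the sample moments. It omits the adjustment step the abstract mentions and is an illustration, not the paper's code.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_moment_fit(sample):
    """Method-of-moments estimates (mu, beta) of a Gumbel(max) law:
    invert var = pi^2 * beta^2 / 6, then mean = mu + gamma * beta."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta
```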

Published

International Journal of Clinical Medicine Research (IJCMR), Vol. 3, No. 5, pages 76-80. Publication date is October 18, 2016.
ISSN: 2375-3838, Open Access.
Editors: Editorial Board.
Publisher: American Association for Science and Technology (AASCIT).

Identifiers


Links

http://www.aascit.org/journal/archive2?journalId=906&paperId=4532

Citations

Nguyen, L. (2016, October 18). Estimating Peak Bone Mineral Density in Osteoporosis Diagnosis by Maximum Distribution. International Journal of Clinical Medicine Research (IJCMR), 3(5), 76-80. Retrieved from http://www.aascit.org/journal/archive2?journalId=906&paperId=4532

Cited


Indexed

Academic Keys, DRJI, MIAR, PBN, Research Bible, WorldCat

Metrics


Categories

Statistics, Medicine


Loc Nguyen (2016, August 18). A New Aware-context Collaborative Filtering Approach by Applying Multivariate Logistic Regression Model into General User Pattern
Type

Journal Article

Abstract

Traditional collaborative filtering (CF) does not take into account contextual factors such as time, place, companion, environment, etc., which are useful information about users or relevant to the recommender application. Therefore, recent aware-context CF takes advantage of such information in order to improve the quality of recommendation. There are three main aware-context approaches: contextual pre-filtering, contextual post-filtering, and contextual modeling. Each approach has its own strong points and drawbacks, but there is a need for a steady and fast inference model that supports the aware-context recommendation process. This paper proposes a new approach which discovers a multivariate logistic regression model by mining both traditional rating data and contextual data. The logistic model is an optimal inference model in response to the binary question "whether or not a user prefers a list of recommendations with regard to a contextual condition". Consequently, such a regression model is used as a filter to remove irrelevant items from recommendations. The final list is the best set of recommendations to be given to users under the contextual information. Moreover, the search space of items for the logistic model is reduced to a smaller set of items, the so-called general user pattern (GUP). GUP helps the logistic model respond faster in real time.
Keywords: aware-context collaborative filtering, logistic regression model.
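The post-filtering role of the logistic model can be sketched as follows. This is an illustrative stand-in, assuming the weights and bias were already learned from rating plus contextual data; the function and variable names are mine, not the paper's.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_filter(items, features, weights, bias, threshold=0.5):
    """Keep only items whose predicted preference probability passes the threshold.

    features maps item -> feature vector (rating + context variables);
    weights/bias are assumed to have been learned beforehand.
    """
    kept = []
    for item in items:
        z = bias + sum(w * x for w, x in zip(weights, features[item]))
        if sigmoid(z) >= threshold:   # "does the user prefer this item in this context?"
            kept.append(item)
    return kept
```

In this view the logistic model answers the binary question from the abstract per item, and the surviving items form the final recommendation list.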

Published

Journal of Data Analysis and Information Processing (JDAIP), Vol. 4, No. 3, August 2016, pages 124-131. Publication date is August 18, 2016.
ISSN online: 2327-7203, ISSN print: 2327-7211, Open Access.
Editor-in-Chief: Feng Shi.
Publisher: Scientific Research Publishing (SCIRP).

Identifiers

DOI: 10.4236/jdaip.2016.43011

Links

http://www.scirp.org/Journal/PaperInformation.aspx?PaperID=69840

Citations

Nguyen, L. (2016, August 18). A New Aware-context Collaborative Filtering Approach by Applying Multivariate Logistic Regression Model into General User Pattern. (F. Shi, Ed.) Journal of Data Analysis and Information Processing (JDAIP), 4(3), 124-131. doi:10.4236/jdaip.2016.43011

Cited


Indexed

Academic Keys, ARDI, Blyun, CALIS, CNKI Scholar, CNPLINKER, CrossRef, EBSCO, EZB, Google Scholar, iScholar, Open Access Library, Open JGate, PBN, Research Bible, SciLit, SHERPA, WorldCat, ZDB.

Metrics

The 2-year Google-based Journal Impact Factor 2-GJIF (July 2016): 1.04

Categories

Statistics, Computer Science


Loc Nguyen (2016, June 17). Longest-path Algorithm to Solve Uncovering Problem of Hidden Markov Model
Type

Journal Article

Abstract

The uncovering problem is one of the three main problems of the hidden Markov model (HMM); it aims to find the optimal state sequence that is most likely to produce a given observation sequence. Although Viterbi is the best-known algorithm for solving the uncovering problem, I introduce a new viewpoint on how to solve it. The proposed algorithm is called the longest-path algorithm, in which the uncovering problem is modeled as a graph. The essence of the longest-path algorithm is thus to find the longest path inside the graph. The optimal state sequence, which is the solution of the uncovering problem, is constructed from such a path.
Keywords: Hidden Markov Model, Uncovering Problem, Longest-path Algorithm.
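The graph viewpoint can be illustrated with the standard trellis: in log space, the most likely state sequence is exactly a maximum-weight path through the trellis graph. The sketch below is the classical Viterbi recurrence phrased as such a path search; it is a generic illustration, not the paper's specific graph construction.

```python
import math

def viterbi_longest_path(obs, states, start_p, trans_p, emit_p):
    """Uncovering problem as a longest path: in log space, the most likely
    state sequence is the maximum-weight path through the trellis graph."""
    # score[s] = weight of the best path ending in state s after the first observation
    score = {s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}
    back = []  # back-pointers to rebuild the path
    for o in obs[1:]:
        prev, new_score = {}, {}
        for s in states:
            best = max(states, key=lambda r: score[r] + math.log(trans_p[r][s]))
            prev[s] = best
            new_score[s] = score[best] + math.log(trans_p[best][s]) + math.log(emit_p[s][o])
        back.append(prev)
        score = new_score
    last = max(states, key=lambda s: score[s])
    path = [last]
    for ptr in reversed(back):       # walk back-pointers from the end
        path.append(ptr[path[-1]])
    path.reverse()
    return path
```

Because log-probabilities are negative edge weights on a directed acyclic trellis, the "longest path" here is well defined and found in O(T·N²) time.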

Published

Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, Applied and Computational Mathematics (ACM), Vol. 6, No. 4-1, pages 39-47. Publication date is June 17, 2016.
ISSN print: 2328-5605, ISSN online: 2328-5613, Open Access.
Editors of Special Issue: Loc Nguyen, Dr. Mohamed Arezki MELLAL.
Publisher: Science Publishing Group.

Identifiers

DOI: 10.11648/j.acm.s.2017060401.13

Links

http://www.sciencepublishinggroup.com/journal/paperinfo?journalid=147&doi=10.11648/j.acm.s.2017060401.13

Citations

Nguyen, L. (2016, June 17). Longest-path Algorithm to Solve Uncovering Problem of Hidden Markov Model. (L. Nguyen, & M. A. MELLAL, Eds.) Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, Applied and Computational Mathematics (ACM), 6(4-1), 39-47. doi:10.11648/j.acm.s.2017060401.13

Cited


Indexed

Academic Keys, ARDI, CNKI, CrossRef, DRJI, EZB, Journal Seek, MIAR, PBN, Research Bible, WorldCat, WZB, ZDB, Zentralblatt MATH.

Metrics


Categories

Mathematics, Computer Science


Loc Nguyen (2016, June 17). Tutorial on Hidden Markov Model
Type

Journal Article

Abstract

The hidden Markov model (HMM) is a powerful mathematical tool for prediction and recognition. Many software products implement HMM and hide its complexity, which assists scientists in using HMM for applied research. However, comprehending HMM in order to take advantage of its strong points requires a lot of effort. This report is a tutorial on HMM, full of mathematical proofs and examples, which helps researchers understand it in the fastest way, from theory to practice. The report focuses on the three common problems of HMM, namely the evaluation problem, the uncovering problem, and the learning problem, in which the learning problem, with the support of optimization theory, is the main subject.
Keywords: Hidden Markov Model, Optimization, Evaluation Problem, Uncovering Problem, Learning Problem.

Published

Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, Applied and Computational Mathematics (ACM), Vol. 6, No. 4-1, pages 16-38. Publication date is June 17, 2016.
ISSN print: 2328-5605, ISSN online: 2328-5613, Open Access.
Editors of Special Issue: Loc Nguyen, Dr. Mohamed Arezki MELLAL.
Publisher: Science Publishing Group.

Identifiers

DOI: 10.11648/j.acm.s.2017060401.12

Links

http://www.sciencepublishinggroup.com/journal/paperinfo?journalid=147&doi=10.11648/j.acm.s.2017060401.12

Citations

Nguyen, L. (2016, June 17). Tutorial on Hidden Markov Model. (L. Nguyen, & M. A. MELLAL, Eds.) Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, Applied and Computational Mathematics (ACM), 6(4-1), 16-38. doi:10.11648/j.acm.s.2017060401.12

Cited


Indexed

Academic Keys, ARDI, CNKI, CrossRef, DRJI, EZB, Journal Seek, MIAR, PBN, Research Bible, WorldCat, WZB, ZDB, Zentralblatt MATH

Metrics


Categories

Mathematics, Computer Science


Loc Nguyen (2016, June 17). Tutorial on Support Vector Machine
Type

Journal Article

Abstract

The support vector machine is a powerful machine learning method for data classification. Using it for applied research is easy, but comprehending it for further development requires a lot of effort. This report is a tutorial on the support vector machine, full of mathematical proofs and examples, which helps researchers understand it in the fastest way, from theory to practice. The report focuses on the theory of optimization, which is the basis of the support vector machine.
Keywords: Support Vector Machine, Optimization, Separating Hyperplane, Sequential Minimal Optimization.
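As a runnable taste of the optimization view, here is a minimal linear SVM trained by primal subgradient descent on the hinge loss. This is a Pegasos-style sketch without a bias term, deliberately simpler than the sequential minimal optimization method the tutorial covers.

```python
def train_linear_svm(points, labels, lam=0.01, epochs=100):
    """Pegasos-style primal subgradient descent (no bias term) on:
    min_w  lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * <w, x_i>)."""
    dim = len(points[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            t += 1
            eta = 1.0 / (lam * t)  # decreasing step size
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1.0 - eta * lam) * wi for wi in w]  # shrinkage from the regularizer
            if margin < 1.0:                           # hinge subgradient is active
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0.0 else -1
```

On linearly separable data through the origin this converges to a separating hyperplane; the full SVM with bias and kernels requires the dual formulation discussed in the tutorial.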

Published

Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, Applied and Computational Mathematics (ACM), Vol. 6, No. 4-1, pages 1-15. Publication date is June 17, 2016.
ISSN print: 2328-5605, ISSN online: 2328-5613, Open Access.
Editors of Special Issue: Loc Nguyen, Dr. Mohamed Arezki MELLAL.
Publisher: Science Publishing Group.

Identifiers

DOI: 10.11648/j.acm.s.2017060401.11

Links

http://www.sciencepublishinggroup.com/journal/paperinfo?journalid=147&doi=10.11648/j.acm.s.2017060401.11

Citations

Nguyen, L. (2016, June 17). Tutorial on Support Vector Machine. (L. Nguyen, & M. A. MELLAL, Eds.) Special Issue “Some Novel Algorithms for Global Optimization and Relevant Subjects”, Applied and Computational Mathematics (ACM), 6(4-1), 1-15. doi:10.11648/j.acm.s.2017060401.11

Cited

Google Scholar Cited by (February, 2019): 3, 2, 1.

Indexed

Academic Keys, ARDI, CNKI, CrossRef, DRJI, EZB, Journal Seek, MIAR, PBN, Research Bible, WorldCat, WZB, ZDB, Zentralblatt MATH

Metrics


Categories

Mathematics, Computer Science


Loc Nguyen (2016, June 10). Continuous Observation Hidden Markov Model
Type

Journal Article

Abstract

The hidden Markov model (HMM) is a powerful mathematical tool for prediction and recognition, but it is not easy to understand its essential disciplines deeply. Previously, I wrote a full tutorial on HMM in order to help researchers comprehend it. However, HMM goes beyond what that tutorial covered when an observation may be signified by a continuous value, such as a real number or real vector, instead of a discrete value. Note that the state of an HMM is always a discrete event, but continuous observations extend the capacity of HMM for solving complex problems. Therefore, this research focuses on HMM in the case that its observations conform to a single probabilistic distribution. Moreover, the mixture HMM, in which observations are characterized by a mixture model of partial probability density functions, is also covered. Mathematical proofs and practical techniques relevant to continuous observation HMM are the main subjects of the research.
Keywords: hidden Markov model, continuous observation, mixture model, evaluation problem, uncovering problem, learning problem.

Published

Kasmera Journal, Vol. 44, No. 6 (2016), pages 65-149. Publication date is June 10, 2016.
ISSN: 0075-5222.
Editor-in-chief: Prof. Dr. Miguel G. De Garcia.
Publisher: ENFERMEDADES INFECCIOSAS Y TROPICALES, MARACAIBO, VENEZUELA.

Awarded

Good evaluation from Kasmera Journal

Identifiers


Links

http://kasmerajournal.com/kasmer/index.php/archive/part/44/6/1/?currentVol=44&currentissue=6

Citations

Nguyen, L. (2016, June 10). Continuous Observation Hidden Markov Model. (M. G. De Garcia, Ed.) Kasmera Journal, 44(6), 65-149.

Cited


Indexed

Academic Search Premier, CIRC, DOAJ, Fuente Academica Premier, Latindex, Scopus, SJR, Thomson ReutersSCIE

Metrics

JCR 2014: Impact factor = 0.071, 5-Year impact factor = 0.071, Eigenfactor Score = 0.00001, Article Influence Score = 0.011

Categories

Mathematics

Charged

Purchase on Kasmera Journal.


Loc Nguyen (2016, April 27). Beta Likelihood Estimation and Its Application to Specify Prior Probabilities in Bayesian Network
Type

Journal Article

Abstract

Maximum likelihood estimation (MLE) is a popular technique for statistical parameter estimation. When a random variable conforms to the beta distribution, this research focuses on applying MLE to the beta density function. This method is called beta likelihood estimation, and it yields useful estimation equations. It is easy to calculate statistical estimates based on these equations in the case that the parameters of the beta distribution are positive integers. Essentially, the method takes advantage of interesting features of the gamma, digamma, and trigamma functions. An application of beta likelihood estimation is to specify prior probabilities in a Bayesian network.
Keywords: maximum likelihood estimation, beta distribution, beta likelihood estimation, gamma function.
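For comparison with the likelihood equations in the paper, a simple method-of-moments baseline for the beta distribution has a closed form. This sketch is not the paper's MLE, which involves the digamma and trigamma functions; it is only the standard moment-matching alternative.

```python
def beta_moment_fit(sample):
    """Method-of-moments estimates for Beta(a, b) from data in (0, 1):
    with sample mean m and variance v,
    a = m * (m(1-m)/v - 1)  and  b = (1-m) * (m(1-m)/v - 1)."""
    n = len(sample)
    m = sum(sample) / n
    v = sum((x - m) ** 2 for x in sample) / (n - 1)  # unbiased sample variance
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common
```

Such closed-form estimates are often used to initialize an iterative likelihood procedure like the one the paper derives.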

Published

British Journal of Mathematics & Computer Science, Volume 16, Issue 3, pages 1-21. Publication date is April 27, 2016.
ISSN: 2231-0851, DOI: 10.9734/bjmcs, Open Access.
Editors and Reviewers: H. M. Srivastava, Paul Bracken, Radosław Jedynak, anonymous, S. Zimeras.
Publisher: SCIENCEDOMAIN international.

Identifiers

DOI: 10.9734/BJMCS/2016/25731
SDI Article No: BJMCS.25731

Links

http://sciencedomain.org/abstract/14364
Peerreview History

Citations

Nguyen, L. (2016, April 27). Beta Likelihood Estimation and Its Application to Specify Prior Probabilities in Bayesian Network. (H. M. Srivastava, P. Bracken, R. Jedynak, anonymous, & S. Zimeras, Eds.) British Journal of Mathematics & Computer Science, 16(3), 1-21. doi:10.9734/BJMCS/2016/25731.

Indexed

BIBSYS, CrossRef, DOAJ, EJournals.Org, EBSCO, Google Scholar, Journal Seek, Open JGate, Polish Ministry of Science and Higher Education, ProQuest, Research Bible, SHERPA, The Knowledge Network, UlrichsWeb, WorldCat, WRLC Catalog, WZB, Zentralblatt MATH

Metrics

SDI Average Peer review marks at publication stage: 8.5/10
Being applied for Thomson Reuters indexing

Categories

Mathematics


Loc Nguyen (2016, April 8). New version of CAT algorithm by maximum likelihood estimation
Type

Journal Article

Abstract

Computer-based testing with the support of the internet and computers is better than traditional paper-based testing. Computerized Adaptive Testing (CAT) is a branch of computer-based testing, but it improves the accuracy of the test score because a CAT system tries to choose items such as tests, exams, and questions that are suitable to examinees' abilities. I propose an advanced CAT algorithm based on two mathematical findings: an equation to estimate the ability of a given examinee and an equation to estimate the ability variance among examinees. These two equations are derived from maximum likelihood estimation of the Item Response Function. The advanced CAT algorithm aims to classify examinees in the best way according to these equations.
Keywords: Computerized Adaptive Testing, CAT, Item Response Function, Maximum Likelihood Estimation, ability estimate, ability variance.
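The ability-estimation step can be illustrated with the simplest item response model. The sketch below runs Newton-Raphson MLE for the ability θ in a Rasch (1PL) model; it is a generic IRT illustration under stated assumptions, not the paper's exact equations.

```python
import math

def estimate_ability(difficulties, responses, iters=50):
    """Newton-Raphson MLE of ability theta in the Rasch model:
    P(correct | theta, b) = 1 / (1 + exp(-(theta - b))).
    Assumes a mixed response pattern (an all-correct or all-wrong
    pattern has no finite maximum-likelihood estimate)."""
    theta = 0.0
    for _ in range(iters):
        grad = 0.0  # d logL / d theta     = sum(u_i - p_i)
        hess = 0.0  # d^2 logL / d theta^2 = -sum(p_i * (1 - p_i))
        for b, u in zip(difficulties, responses):
            p = 1.0 / (1.0 + math.exp(-(theta - b)))
            grad += u - p
            hess -= p * (1.0 - p)
        theta -= grad / hess  # Newton step on the concave log-likelihood
    return theta
```

Each CAT iteration would re-estimate θ like this after every answered item and then pick the next item whose difficulty is most informative near the current θ.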

Published

Sylwan Journal, Volume 160, Issue 4, April 2016, pages 218-244.
ISSN: 0039-7660
Editor-in-Chief: B. N. Buszewski.
Publisher: Sylwan Journal, UL BITWY WARSZAWSKIEJ 1920 R NR 3, WARSZAWA, POLAND, PL02 362.

Awarded

Excellent evaluation from Sylwan Journal

Identifiers


Links

http://sylwan.ibles.org/syl/index.php/archive/part/160/4/1/?currentVol=160&currentissue=4

Citations

Nguyen, L. (2016, April 8). New version of CAT algorithm by maximum likelihood estimation. (B. N. Buszewski, Ed.) Sylwan Journal, 160(4), 218-244.

Cited


Indexed

CABI, PSJC, Thomson ReutersSCIE

Metrics

JCR 2016: Impact factor = 0.295, Immediacy Index = 0.12, 5-Year impact factor = 0.251, Cited Half-life > 10.0

Categories

Mathematics, Statistics

Charged

Purchase on Sylwan Journal.


Loc Nguyen (2016, March 28). Theorem of SIGMA-gate Inference in Bayesian Network
Type

Journal Article

Abstract

Bayesian network is a powerful mathematical tool for diagnosis and assessment tasks. Parameter learning in Bayesian networks is a complicated study, but I recognize that parameter learning becomes easy in some situations. Especially, when the Bayesian network is a weighted graph and its child node is an aggregation of mutually independent parent nodes, there is a simple way to specify the conditional probability tables which are the parameters of the Bayesian network. In this research, I propose and prove the theorem of SIGMA-gate inference, which is the foundation of this simple way, helping us to transform a weighted graph into a Bayesian network. Note that the theorem is derived from the work of Millán and Pérez-de-la-Cruz in their article “A Bayesian Diagnostic Algorithm for Student Modeling and its Evaluation”, published in the User Modeling and User-Adapted Interaction journal in June 2002.
Keywords: Bayesian network, parameter learning, SIGMA-gate inference.

Published

Wulfenia Journal, Volume 23, Issue 3, pages 280-289. Publication date is March 28, 2016.
ISSN: 1561-882X
Editor-in-Chief: Prof. Dr. Vienna S. Franz
Contact: Wulfenia Journal, LANDESMUSEUM KARNTEN, MUSEUMGASSE 2, KLAGENFURT, AUSTRIA, A9021.

Identifiers


Links

http://www.multidisciplinarywulfenia.org/auto/index.php/archive/part/23/3/1/?currentVol=23&currentissue=3

Citations

Nguyen, L. (2016, March 28). Theorem of SIGMA-gate Inference in Bayesian Network. (V. S. Franz, Ed.) Wulfenia Journal, 23(3), 280-289.

Cited


Indexed

BIOSIS Previews, Thomson ReutersSCIE

Metrics

JCR 2016 release: Impact factor = 0.267, Eigenfactor = 0.00003

Categories

Mathematics

Charged

Purchase on Wulfenia Journal.


Loc Nguyen (2016, February). Specifying Prior Probabilities in Bayesian Network by Maximum Likelihood Estimation method
Type

Journal Article

Abstract

Bayesian network provides a solid inference mechanism for confirming a hypothesis by collecting evidence. A Bayesian network consists of two models: the qualitative model and the quantitative model. The qualitative model is its structure, and the quantitative model is its parameters, namely the conditional probability tables (CPTs) whose entries are probabilities quantifying the dependencies among variables in the network. The quality of a CPT depends on the initialized values of its entries. Such initial values are prior probabilities. Because the beta function provides some conveniences when specifying CPTs, this function is used as the basic distribution in my method. The main problem of defining prior probabilities is how to estimate the parameters of the beta distribution. It is slightly unfortunate that the equations whose solutions are the parameter estimators are differential equations, which are too difficult to solve. By applying the maximum likelihood estimation (MLE) technique, I derive simple equations so that the differential equations are eliminated and it is much easier to estimate the parameters in the case that such parameters are positive integers. I also propose an algorithm to find approximate solutions of these simple equations.
Keywords: prior probabilities, Bayesian network, maximum likelihood estimation.

Published

Sylwan Journal, Volume 160, Issue 2, February 2016, pages 281-298.
ISSN: 0039-7660
Editor-in-Chief: B. N. Buszewski.
Contact: Sylwan Journal, UL BITWY WARSZAWSKIEJ 1920 R NR 3, WARSZAWA, POLAND, PL02 362.

Awarded

Good evaluation from Sylwan Journal

Identifiers


Links

http://sylwan.ibles.org/syl/index.php/archive/part/160/2/2/?currentVol=160&currentissue=2

Citations

Nguyen, L. (2016, February). Specifying Prior Probabilities in Bayesian Network by Maximum Likelihood Estimation method. (B. N. Buszewski, Ed.) Sylwan Journal, 160(2), 281-298.

Cited


Indexed

CABI, PSJC, Thomson ReutersSCIE

Metrics

JCR 2016: Impact factor = 0.295, Immediacy Index = 0.12, 5-Year impact factor = 0.251, Cited Half-life > 10.0

Categories

Mathematics

Charged

Purchase on Sylwan Journal.


Loc Nguyen, Thu-Hang T. Ho (2015, November 23). A proposed method for choice of sample size without predefining error
Type

Journal Article

Abstract

Sample size is very important in statistical research because it should be neither too small nor too large. Given a significance level α, the sample size is calculated based on the z-value and a predefined error. Such an error is defined based on a previous experiment or another study, or it can be determined subjectively by a specialist, which may cause incorrect estimation. Therefore, this research proposes an objective method to estimate the sample size without predefining the error. Given an available sample X = {X_{1}, X_{2},..., X_{n}}, the error is calculated via an iterative process in which the sample X is resampled many times. Moreover, after the sample size is estimated, it can be used to collect a new sample in order to estimate a new sample size, and so on.
Keywords: sample size, choice of sample size, predefined error.
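The classic formula behind this line of work is n = (z·σ/E)². The sketch below implements that formula and, when no error E is predefined, estimates one as the bootstrap standard error of the mean from the available sample, echoing the paper's idea of deriving the error by resampling; this is a loose illustration, not the paper's iterative procedure.

```python
import math
import random

def sample_size(sample, z=1.96, error=None, seed=0):
    """Choice of sample size n = ceil((z * sigma / E)^2).
    If error is None, E is estimated as the bootstrap standard error of the
    mean (resampling the sample with replacement), instead of being fixed
    in advance."""
    n = len(sample)
    mean = sum(sample) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    if error is None:
        rng = random.Random(seed)
        means = []
        for _ in range(1000):  # resample the sample many times
            boot = [rng.choice(sample) for _ in range(n)]
            means.append(sum(boot) / n)
        mb = sum(means) / len(means)
        error = math.sqrt(sum((m - mb) ** 2 for m in means) / (len(means) - 1))
    return math.ceil((z * sigma / error) ** 2)
```

With a predefined error the function reduces to the textbook formula; without one, the bootstrap supplies an objective, data-driven E.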

Published

Journal of Data Analysis and Information Processing (JDAIP), Vol. 3, No. 4, November 2015, pages 163-167.
ISSN online: 2327-7203, ISSN print: 2327-7211, Open Access.
Editor-in-Chief: Feng Shi.
Publisher: Scientific Research Publishing (SCIRP).

Identifiers

DOI: 10.4236/jdaip.2015.34016

Links

http://www.scirp.org/Journal/PaperInformation.aspx?PaperID=61363

Citations

Nguyen, L., & Ho, H. (2015, November 23). A proposed method for choice of sample size without predefining error. (F. Shi, Ed.) Journal of Data Analysis and Information Processing (JDAIP), 3(4), 163-167. doi:10.4236/jdaip.2015.34016

Cited


Indexed

Academic Keys, ARDI, Blyun, CALIS, CNKI Scholar, CNPLINKER, CrossRef, EBSCO, EZB, Google Scholar, iScholar, Open Access Library, Open JGate, PBN, Research Bible, SciLit, SHERPA, WorldCat, ZDB.

Metrics

The 2-year Google-based Journal Impact Factor 2-GJIF (October 2015): 1
SCIRP views for the paper (November 2015): 60
SCIRP downloads for the paper (November 2015): 43

Categories

Statistics


Loc Nguyen, Minh-Phung T. Do (2015, November 13). Hudup: A Framework of E-commercial Recommendation Algorithms
Type

Conference Presentation

Abstract

A recommendation algorithm is very important to e-commercial websites because it can provide favorite products to online customers, which results in an increase in sales revenue. I propose an infrastructure for e-commercial recommendation solutions. It is a middleware framework for e-commercial recommendation software, which supports scientists and software developers in building their own recommendation algorithms with low cost, high achievement, and fast speed. This report is a full description of the proposed framework, which begins with the general architecture and then concentrates on programming classes. Finally, a tutorial helps readers to comprehend the framework.
Keywords: Recommendation Algorithm, Recommendation Server, Middleware Framework.

Presented

The 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K 2015). Publication date is November 13, 2015
Editors: Ana Fred, Jan Dietz, David Aveiro, Kecheng Liu, Joaquim Filipe.
Certified by: Institute for Systems and Technologies of Information, Control and Communication (INSTICC).
Publisher: SCITEPRESS.
Place and date: Lisbon, Portugal, November 12-14, 2015.

Identifiers

DOI: 10.13140/RG.2.2.27533.84969/1

Links

https://goo.gl/BQaEcm

Citations

Nguyen, L., & Do, M.-P. T. (2015, November 13). Hudup: A Framework of E-commercial Recommendation Algorithms. In A. Fred, J. Dietz, D. Aveiro, K. Liu, & J. Filipe (Ed.), Final Program and Book of Abstracts of The 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K 2015) (p. 56). Lisbon: SCITEPRESS. Retrieved from https://goo.gl/BQaEcm

Cited


Indexed


Metrics


Categories

Computer Science


Loc Nguyen (2015, November 13). A New Approach for Collaborative Filtering based on Bayesian Network Inference
Type

Conference Paper

Abstract

Collaborative filtering (CF) is one of the most popular recommendation algorithms, in which the items recommended to users are determined by surveying their communities. There are two main CF approaches: memory-based and model-based. The model-based approach is more dominant in real-time response because it takes advantage of an inference mechanism in the recommendation task. However, the problem of incomplete data is still open research, and the inference engine is being improved more and more so as to gain high accuracy and high speed. I propose a new model-based CF that applies a Bayesian network (BN) to the inference engine, with the assertion that BN is an optimal inference model because the BN represents the user's purchase pattern and Bayesian inference is an evidence-based inferring mechanism appropriate to rating databases. Because the quality of a BN relies on the completeness of the training data, it degrades if the training data have many missing values. So I also suggest an averaging technique to fill in missing values.
Keywords: Collaborative Filtering, Bayesian Network.
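The averaging idea for completing the training data can be sketched directly. This is an illustrative stand-in for the missing-value step only (the BN construction itself is not shown), using per-item averages with a global-average fallback, which is one common reading of such an averaging technique.

```python
def fill_missing_by_average(ratings):
    """Fill missing ratings (None) with the item's average rating, falling
    back to the global average when nobody rated the item. ratings is a
    user-by-item matrix (list of rows)."""
    n_users = len(ratings)
    n_items = len(ratings[0])
    known = [r for row in ratings for r in row if r is not None]
    global_avg = sum(known) / len(known)
    filled = [row[:] for row in ratings]  # do not mutate the input
    for j in range(n_items):
        col = [ratings[i][j] for i in range(n_users) if ratings[i][j] is not None]
        avg = sum(col) / len(col) if col else global_avg
        for i in range(n_users):
            if filled[i][j] is None:
                filled[i][j] = avg
    return filled
```

The completed matrix can then serve as training data for whatever model (here, a BN) follows.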

Published

The 7th International Conference on Knowledge Discovery and Information Retrieval (KDIR 2015), the part of The 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K 2015) in conjunction with The 7th International Joint Conference on Computational Intelligence (IJCCI 2015).
Published in Proceedings of The 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management - Volume 1: KDIR (IC3K 2015), pages 475-480.
Editors: Ana Fred, Jan Dietz, David Aveiro, Kecheng Liu, Joaquim Filipe.
Publisher: SCITEPRESS  Science and Technology Publications, Lda.
Place and date: Lisbon Marriott Hotel, Lisbon, Portugal, November 12-14, 2015.

Identifiers

ISBN: 9789897581588
DOI: 10.5220/0005635204750480

Links


Citations

Nguyen, L. (2015, November 13). A New Approach for Collaborative Filtering based on Bayesian Network Inference. The 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management. 1: KDIR (IC3K 2015), pp. 475-480. Lisbon, Portugal: SCITEPRESS - Science and Technology Publications, Lda. doi:10.5220/0005635204750480

Cited


Indexed

DBLP, ELSEVIER, IET Inspec, SCITEPRESS Digital Library, Scopus, Thomson Reuters - CPCI

Metrics


Categories

Computer Science


Loc Nguyen (2015, October 9). An Advanced Approach of Local Counter Synchronization in Timestamp Ordering Algorithm in Distributed Concurrency Control
Type

Journal Article

Abstract

Concurrency control is a problem that database management systems (DBMS) meet with difficulty, especially distributed DBMS. There are two main methods of concurrency control: locking-based and timestamp-based. Each method has its own disadvantages, but the locking-based approach is realized in most distributed DBMS because its feasibility and strictness lessen danger in a distributed environment. Otherwise, the timestamp ordering algorithm is merely implemented in central DBMS due to the issue of local counter synchronization among sites in a distributed environment. The common solution is broadcasting a message about the change of the local counter (at one site) over the distributed network so that all remaining sites “know” to update their own counters. However, this solution suffers from low performance. So, an advanced approach is proposed to overcome such disadvantages by introducing another measure, the so-called active number, which is responsible for harmonizing local counters among distributed sites. Moreover, another method is proposed that applies a minimum spanning tree to reduce the cost of broadcasting messages over the distributed network.
Keywords: distributed concurrency control, timestamp ordering algorithm, local counter synchronization.

Published

Open Access Library Journal (OALib Journal), Volume 2, Issue 10, pages 1-5.
ISSN online: 2333-9705, ISSN print: 2333-9721, Open Access.
Editor: Nigel John.
Publisher: Open Access Library Inc.

Identifiers

DOI: 10.4236/oalib.1100982

Links

http://www.oalib.com/articles/3113738

Citations

Nguyen, L. (2015, October 9). An Advanced Approach of Local Counter Synchronization in Timestamp Ordering Algorithm in Distributed Concurrency Control. (N. John, Ed.) Open Access Library Journal (OALib Journal), 2(10), 1-5. doi:10.4236/oalib.1100982

Cited

Google Scholar Cited by (November, 2017): 3, 2, 1.

Indexed

Academic Keys, CiteFactor, CNPLINKER, CrossRef, Directory of Science, DTU Findit, EBSCO, EZB, Open JGate, Open Access Library, Research Bible, SHERPA, WorldCat, ZDB

Metrics


Categories

Computer Science


Loc Nguyen (2015, September 28). Introduction to A Framework of E-commercial Recommendation Algorithms
Type

Journal Article

Abstract

A recommendation algorithm is very important for e-commercial websites because it can recommend favorite products to online customers, which results in an increase in sales revenue. I propose a framework of e-commercial recommendation algorithms. This is a middleware framework, or “operating system”, for e-commercial recommendation software, which supports scientists and software developers in building their own recommendation algorithms on top of it with low cost, high achievement, and fast speed.
Keywords: Recommendation Algorithm, Recommendation Server, Middleware Framework.

Published

American Journal of Computer Science and Information Engineering (AJCSIE), Vol. 2, No. 4, September 2015, pages 33-44.
ISSN online: 2381-1129, ISSN print: 2381-1110, Open Access.
Editors: Abd el rahman Shabayek and more.
Publisher: American Association for Science and Technology (AASCIT).

Presented

European Project Space, a part of The 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K 2015).
Place and date: Lisbon Marriott Hotel, Lisbon, Portugal, November 12-14, 2015.

Identifiers


Links

http://www.aascit.org/journal/archive2?journalId=912&paperId=1894

Citations

Nguyen, L. (2015, September 28). Introduction to A Framework of E-commercial Recommendation Algorithms. (A. e. Shabayek et al., Eds.) American Journal of Computer Science and Information Engineering (AJCSIE), 2(4), 33-44. Retrieved from http://www.aascit.org/journal/archive2?journalId=912&paperId=1894.

Cited

Google Scholar Cited by (January, 2017): 1.

Indexed


Metrics


Categories

Computer Science


Loc Nguyen (2015, January 22). A User Modeling System for Adaptive Learning
Type

Conference Paper

Abstract

Adaptive learning is a research branch of e-learning which supports personalization in study. Thus, an adaptive learning system has the ability to change its behavior to provide learning content and a pedagogic environment/method for every student in accordance with her/his individual characteristics such as knowledge, learning styles, interests, etc. A student's characteristics are structured as a so-called user model, and a so-called user modeling system is responsible for managing such a user model. The user modeling system is the heart of adaptive learning. This research proposes a novel user modeling system named Zebra which manipulates students' characteristics in an effective way. Moreover, Zebra provides a powerful inference mechanism for reasoning out new information about students in order to support adaptive learning in the best way. Zebra is implemented as computer software, and so the purpose of this paper is to introduce Zebra, a novel and powerful user modeling system.
Keywords: user modeling system, user model, adaptive learning.

Published

Proceedings of The 2014 International Conference on Interactive Collaborative Learning (ICL 2014), pages 864-866. Publication date is January 22, 2015.
INSPEC Accession Number: 14869394, ISBN online: 978-1479944378.
Editors: Sebastian Schreiter.
Publisher: IEEE
Place and date: The 2014 World Engineering Education Forum (WEEF-2014), Dubai, UAE, 3-6 December 2014.

Published

Standard Scientific Research and Essays (SSRE), Volume 2, Issue 4, April 2014, pages 65-209.
ISSN online: 2310-7502, ISSN print: 2310-7502, Open Access.
Editor: Mohammad Asaduzzaman Chowdhury.
Publisher: Standard Research Journals.

Presented

The research is presented at The 2014 World Engineering Education Forum (WEEF-2014), Dubai, UAE, 3-6 December 2014.
The research is presented secondly at Athabasca University, Canada, October 2013.
The research is also presented at Doctoral Consortium of The 3rd International Conference on Theories and Applications of Computer Science (ICTACS2010), Can Tho University, Vietnam, September 26, 2010 and Doctoral Consortium of The 4th Conference on Information Technology and Telecommunications (ICTFIT2012), Ho Chi Minh University of Science, Vietnam.
The research is presented firstly at Athabasca University, Canada, November 2009.

Awarded

Excellent paper by Standard Research Journals

Identifiers

DOI: 10.1109/ICL.2014.7017887

Links

http://ieeexplore.ieee.org/document/7017887
http://standresjournals.org/journals/SSRE/Abstract/2014/april/Loc.html

Citations

Nguyen, L. (2015, January 22). A User Modeling System for Adaptive Learning. In S. Schreiter (Ed.), The 2014 International Conference on Interactive Collaborative Learning (ICL 2014) (pp. 864-866). The 2014 World Engineering Education Forum (WEEF-2014), Dubai, UAE: IEEE. doi:10.1109/ICL.2014.7017887
Nguyen, L. (2014, April). A User Modeling System for Adaptive Learning. University of Science, Ho Chi Minh city, Vietnam. Abuja, Nigeria: Standard Research Journals. Retrieved from http://standresjournals.org/journals/SSRE/Abstract/2014/april/Loc.html

Cited

Google Scholar Cited by (October, 2018): 3, 2, 1.

Indexed

Academic Search Alumni Edition, Academic Search Complete, AGORA, ASFA, ASI, CiteFactor, CNKI Scholar, DOAJ, DRJI, EBSCO, EFITA, Expended Academy Index, General Impact Factor, GEOBASE, GeoRef, Google Scholar, InnoSpace, International Scientific Indexing, Journal Informatics, OAJI, Open JGate, Meteorological & Geoastrophysical Abstracts, OARE, ProQuest, Research Bible, TU Berlin, Universal Impact Factor, WZB

Metrics

Universal Impact Factor (October 2015): 1.2363

Categories

Computer Science, Education

Charged

Purchase on IEEE Digital Library.


Loc Nguyen, Thu-Hang T. Ho (2015, January 12). A fast computational formula for Kappa coefficient
Type

Journal Article

Abstract

The Kappa coefficient is very important in clinical research when inter-rater agreement is required among physicians who measure clinical data. The traditional Kappa formula is too costly to calculate on huge data because of the many arithmetic operations needed to determine the probability of observed agreement and the probability of chance agreement. Therefore, this research proposes a fast computational formula for the Kappa coefficient based on observations about the probability of observed agreement and the probability of chance agreement. These observations lead to a method that saves time when calculating the Kappa coefficient and reduces the number of arithmetic operations to a minimum. Finally, the fast formula is applied to gestational data measured in the real world so as to evaluate its strong points.
Keywords: Kappa Coefficient, Fast Computational Formula.
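The paper's fast formula itself is not reproduced in this abstract. As an illustrative sketch (data and function names are hypothetical), the standard Cohen's kappa that it accelerates, kappa = (p_o − p_e) / (1 − p_e), can be computed as:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Standard Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    # p_o: observed agreement, the fraction of cases where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # p_e: chance agreement, from each rater's marginal category frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy ratings by two physicians on four cases.
kappa = cohen_kappa(["yes", "no", "yes", "yes"], ["yes", "no", "no", "yes"])
# Here p_o = 3/4 and p_e = 1/2, so kappa = 0.5.
```

The paper's contribution is reorganizing this computation so that p_o and p_e are obtained with far fewer arithmetic operations on large samples.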

Published

Science Journal of Clinical Medicine (SJCM), Vol. 4, No. 1, January 2015, pages 1–3.
ISSN online: 2327-2732, ISSN print: 2327-2724, Open Access.
Editors: Arzu Genc, Shao-An Xue, Richard Rison, Vania Rocha.
Publisher: Science Publishing Group.

Awarded

Certified by Science Publishing Group

Identifiers

DOI: 10.11648/j.sjcm.20150401.11

Links

http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=159&doi=10.11648/j.sjcm.20150401.11

Citations

Nguyen, L., & Ho, H. (2015, January 12). A fast computational formula for Kappa coefficient. (A. Genc, S.-A. Xue, R. Rison, & V. Rocha, Eds.) Science Journal of Clinical Medicine (SJCM), 4(1), 1–3. doi:10.11648/j.sjcm.20150401.11

Cited

ResearchGate Cited by (April, 2017): 1.

Indexed

Academic Keys, ARDI, CNKI, CrossRef, DRJI, EZB, Journal Seek, PBN, Research Bible, WorldCat, WZB, ZDB

Metrics

SciencePG views for the article (October 2015): 214
SciencePG downloads for the article (October 2015): 132

Categories

Statistics, Medicine


Loc Nguyen (2015, January 10). Feasible length of Taylor polynomial on given interval and application to find the number of roots of equation
Type

Journal Article

Abstract

In many situations it is necessary to represent an arbitrary function as a polynomial because polynomials have many valuable properties. Fortunately, any analytic function can be approximated by a Taylor polynomial: the higher the degree of the Taylor polynomial, the better the approximation. The problem is how to achieve an optimal approximation under the restriction that the degree is not too high, because of computation cost. This research proposes a method to estimate a feasible degree of the Taylor polynomial such that a Taylor polynomial with degree equal to or larger than this feasible degree is likely to be a good approximation of the function on a given interval. The feasible degree is called the feasible length of the Taylor polynomial. The research also introduces an application that combines Sturm's theorem with the method of approximating a function by a Taylor polynomial of feasible length in order to count the number of roots of an equation in a given interval.
Keywords: Taylor polynomial, roots of equation, analytic function approximation, feasible length.
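A minimal sketch of the idea, assuming the Lagrange remainder bound as the degree criterion (the paper's actual estimation method may differ):

```python
import math

def feasible_degree(deriv_bound, radius, tol):
    """Smallest degree n such that the Lagrange remainder bound
    M * r^(n+1) / (n+1)!  falls below tol on [a - r, a + r],
    where M bounds |f^(n+1)| on that interval."""
    n = 0
    while deriv_bound * radius ** (n + 1) / math.factorial(n + 1) > tol:
        n += 1
    return n

def taylor_exp(x, degree):
    """Taylor polynomial of exp expanded at 0, evaluated at x."""
    return sum(x ** k / math.factorial(k) for k in range(degree + 1))

# For f(x) = exp(x) on [-1, 1], every derivative is bounded by e,
# so degree 9 already guarantees an error below 1e-6 on the interval.
n = feasible_degree(math.e, 1.0, 1e-6)
```

Once such a polynomial replaces the original function, Sturm's theorem can be applied to the polynomial to count roots on the interval.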

Published

International Journal of Mathematical Analysis and Applications, Vol. 1, No. 5, December 2014, pages 80–83.
ISSN: 2375-3927.
Editors: Abbas Moustafa and more.
Publisher: American Association for Science and Technology (AASCIT).

Identifiers


Links

http://www.aascit.org/journal/archive2?journalId=921&paperId=1017

Citations

Nguyen, L. (2015, January 10). Feasible length of Taylor polynomial on given interval and application to find the number of roots of equation. (A. Moustafa et al., Eds.) International Journal of Mathematical Analysis and Applications, 1(5), 80–83. Retrieved from http://www.aascit.org/journal/archive2?journalId=921&paperId=1017

Cited


Indexed

Academic Keys, DRJI, Research Bible

Metrics

AASCIT views for the article (October 2015): 215
AASCIT downloads for the article (October 2015): 54

Categories

Mathematics


Loc Nguyen (2014, December 6). Evaluating Adaptive Learning Model
Type

Conference Paper

Abstract

Distance learning, or e-learning, is a trend of modern education which brings new chances of study to everyone. Everyone can study anywhere and at any time, so they can improve and update their knowledge throughout their lifetime. Adaptive learning is a research branch of e-learning which gives adaptation and personalization to users in the learning context. Different people receive different learning materials / teaching methods in accordance with their individual information / characteristics such as knowledge, goal, experience, interest, and background. Such individual information is structured in a format called a user model. The user model is the heart of an adaptive learning system and is managed by a user modeling system. There are many theories and practical methods for building up user models and adaptive learning systems, and each method has particular aspects, but it is very difficult to determine which method or system is good because there is no evaluation standard and each method has particular strong points and drawbacks. Therefore, the goal of this research is to propose criteria for evaluating adaptive learning systems and user modeling systems. Moreover, the research gives an evaluation scenario, considered as an example, for applying the proposed criteria to evaluating an adaptive learning system and user modeling in the learning context.
Keywords: adaptive learning, user modeling, evaluation.

Published

The 17th International Conference on Interactive Computer-aided Learning (ICL2014), which is a part of The 2014 World Engineering Education Forum (WEEF2014), Engineering Education For A Global Community.
Published in The 2014 International Conference on Interactive Collaborative Learning (ICL 2014), pages 818–822.
Publisher: IEEE.
Place and date: Dubai International Convention and Exhibition Centre, Dubai, UAE, December 3–6, 2014.

Identifiers

DOI: 10.1109/ICL.2014.7017878
INSPEC Accession Number: 14869356

Links

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7017878

Citations

Nguyen, L. (2014, December 6). Evaluating Adaptive Learning Model. The 2014 International Conference on Interactive Collaborative Learning (ICL 2014) (pp. 818–822). Dubai, UAE: IEEE. doi:10.1109/ICL.2014.7017878

Cited


Indexed

IEEE Digital Library

Metrics

IEEE Digital Library usage (October 2015): 51

Categories

Computer Science, Education, Statistics

Charged

Purchase on IEEE Digital Library.


Loc Nguyen (2014, October 21). Improving analytic function approximation by minimizing square error of Taylor polynomial
Type

Journal Article

Abstract

In many situations it is necessary to represent an arbitrary function as a polynomial because polynomials have many valuable properties. Fortunately, any analytic function can be approximated by a Taylor polynomial. The quality of the Taylor approximation within a given interval depends on the degree of the Taylor polynomial and the width of the interval. The Taylor polynomial gains a highly precise approximation at the point where the polynomial is expanded, and so the farther from that point, the worse the approximation. Given two successive Taylor polynomials which are approximations of the same analytic function in a given interval, this research proposes a method to improve the later one by minimizing their deviation, the so-called square error. Based on this method, the research also proposes a so-called shifting algorithm which produces an optimal approximated Taylor polynomial in the given interval by dividing the interval into subintervals and shifting along the sequence of these subintervals in order to improve the Taylor polynomials in a successive process, based on minimizing the square error.
Keywords: Taylor polynomial, analytic function approximation, square error.

Published

International Journal of Mathematical Analysis and Applications, Vol. 1, No. 4, October 2014, pages 63–67.
ISSN: 2375-3927.
Editors: Abbas Moustafa and more.
Publisher: American Association for Science and Technology (AASCIT).

Identifiers


Links

http://www.aascit.org/journal/archive2?journalId=921&paperId=1016

Citations

Nguyen, L. (2014, October 21). Improving analytic function approximation by minimizing square error of Taylor polynomial. (A. Moustafa et al., Eds.) International Journal of Mathematical Analysis and Applications, 1(4), 63–67. Retrieved from http://www.aascit.org/journal/archive2?journalId=921&paperId=1016

Cited


Indexed

Academic Keys, DRJI, Research Bible

Metrics

AASCIT views for the article (October 2015): 184
AASCIT downloads for the article (October 2015): 53

Categories

Mathematics


Loc Nguyen (2014, September). Theorem of logarithm expectation and its application to prove sample correlation coefficient as unbiased estimate
Type

Journal Article

Abstract

In statistical theory, a statistic that is a function of sample observations is used to estimate a distribution parameter. The statistic is called an unbiased estimate if its expectation equals the theoretical parameter. Proving whether or not a statistic is an unbiased estimate is very important, but the proof may require a lot of effort when the statistic is a complicated function. Therefore, this research facilitates such proofs by proposing a theorem which states that the expectation of a variable x > 0 is μ if and only if the limit of the logarithm expectation of x approaches the logarithm of μ. To make this theorem clear, the research gives an example of proving that the correlation coefficient is an unbiased estimate by taking advantage of the theorem.
Keywords: logarithm expectation, correlation coefficient, unbiased estimate.

Published

Journal of Mathematics and System Science (JMSS), Volume 4, Number 9, September 2014, pages 605–608.
ISSN online: 2159-5305, ISSN print: 2159-5291, Open Access.
Editors: Assia GuezaneLakoud, William P. Fox, Elisa Francomano, Sergo A. Episkoposian, Elizbar Nadaraya, Alexander Nikolaevich Raikov, Baha ŞEN, Claudio Cuevas, Wattanavadee Sriwattanapongse, Mohammad Mehdi Rashidi.
Publisher: David Publishing Company.

Identifiers

DOI: 10.17265/2159-5291/2014.09.003

Links

http://www.davidpublisher.org/index.php/Home/Article/index?id=505.html

Citations

Nguyen, L. (2014, September). Theorem of logarithm expectation and its application to prove sample correlation coefficient as unbiased estimate. (A. Guezane-Lakoud, W. P. Fox, E. Francomano, S. A. Episkoposian, E. Nadaraya, A. N. Raikov, et al., Eds.) Journal of Mathematics and System Science (JMSS), 4(9), 605–608. doi:10.17265/2159-5291/2014.09.003

Cited


Indexed

Academic Keys, BASE, CEPS, CQVIP, CSTJ, CiteFactor, CSA, DBH, EBSCO, EZB, getCITED, Google Scholar, Index Copernicus, InfoBase Index, InnoSpace, NSD, OCLC, Open JGate, PAIS, PBN, ProQuest, Scholar Steer, SHERPA, SIS, SJournal Index, Summon Serials Solutions, Turkish Education Index, UCSD, UDL, UlrichsWeb, WZB

Metrics


Categories

Statistics, Mathematics


Loc Nguyen (2014, May). A New Algorithm for Modeling and Inferring User’s Knowledge by Using Dynamic Bayesian Network
Type

Journal Article

Abstract

Dynamic Bayesian network (DBN) is more robust than normal Bayesian network (BN) for modeling users' knowledge because it allows monitoring a user's process of gaining knowledge and evaluating his/her knowledge. However, the size of the DBN becomes enormous when the process continues for a long time; thus, performing probabilistic inference becomes inefficient. Moreover, the number of transition dependencies among points in time is too large to compute posterior marginal probabilities when doing inference in the DBN.
To overcome these difficulties, I propose a new algorithm in which both the size of the DBN and the number of Conditional Probability Tables (CPT) in the DBN are kept intact (unchanged) when the process continues for a long time. This method includes six steps: initializing the DBN, specifying transition weights, reconstructing the DBN, normalizing weights of dependencies, redefining CPT(s), and probabilistic inference. Our algorithm also solves the problem of temporary slips and lucky guesses: “the learner does (doesn't) know a particular subject but there is solid evidence convincing us that she/he doesn't (does) understand it; this evidence just reflects a temporary slip (or lucky guess)”.
Keywords: Dynamic Bayesian Network.

Published

Statistics Research Letters (SRL), Volume 3, Issue 2, May 2014.
ISSN online: 2325-7059, ISSN print: 2325-7040, Open Access.
Editor: Mohammad Z. Raqab.
Publisher: Science and Engineering Publishing Company.

Identifiers


Links

http://www.srljournal.org/paperInfo.aspx?ID=6933
http://www.srljournal.org/Download.aspx?ID=6933

Citations

Nguyen, L. (2014, May). A New Algorithm for Modeling and Inferring User’s Knowledge by Using Dynamic Bayesian Network. (M. Z. Raqab, Ed.) Statistics Research Letters (SRL), 3 (2). Retrieved from http://www.srljournal.org/paperInfo.aspx?ID=6933

Cited


Indexed

Academia.edu, Academic Keys, Aol., CiteFactor, Cloud Database, CNKI, CNPLINKER, CrossRef, DataHub, DeepDyve, dogpile, DRJI, EZB, getCITED, GitHub, Internet Archive, JournalTOCs, JGate, PubZone, Research Bible, Rice St, Scribd, SIS, UlrichsWeb, Universal Impact Factor, WIPO, WorldCat, Yandex

Metrics

SRL downloads for the article (October 2015): 253

Categories

Computer Science, Mathematics


Loc Nguyen (2014, May 28). User Model Clustering
Type

Journal Article

Abstract

The user model, which is the representation of information about a user, is the heart of adaptive systems. It helps adaptive systems perform adaptation tasks. There are two kinds of adaptation:
- Individual adaptation, which regards each user
- Group adaptation, which focuses on groups of users
The basic problem that needs solving in order to support group adaptation is how to create user groups. This relates to clustering techniques for clustering user models, because a group is considered a cluster of similar user models. In this paper I discuss two clustering algorithms, k-means and k-medoids, and also propose dissimilarity measures and similarity measures which are applied to different structures (forms) of user models such as vector, overlay, and Bayesian network.
Keywords: User Model, Cluster.
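A minimal sketch of k-means over vector-form user models (toy data; the paper's dissimilarity measures for overlay and Bayesian-network models are not reproduced here):

```python
def kmeans(user_models, k, iters=20):
    """Plain k-means over user models represented as feature vectors."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    centroids = [list(m) for m in user_models[:k]]  # naive initialisation
    for _ in range(iters):
        # Assignment step: each user model joins its nearest centroid.
        groups = [[] for _ in range(k)]
        for m in user_models:
            j = min(range(k), key=lambda i: dist2(m, centroids[i]))
            groups[j].append(m)
        # Update step: each centroid moves to the mean of its group.
        for j, g in enumerate(groups):
            if g:
                centroids[j] = [sum(col) / len(g) for col in zip(*g)]
    return centroids, groups

# Hypothetical user models: (knowledge level, interest level) per student.
models = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
centroids, groups = kmeans(models, 2)
```

Each resulting group can then serve as one user group for group adaptation; k-medoids differs only in that centroids are restricted to actual user models.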

Published

Journal of Data Analysis and Information Processing (JDAIP), Vol. 2, No. 2, May 2014, pages 41–48.
ISSN online: 2327-7203, ISSN print: 2327-7211, Open Access.
Editor-in-Chief: Feng Shi.
Publisher: Scientific Research Publishing (SCIRP).

Identifiers

DOI: 10.4236/jdaip.2014.22006

Links

http://www.scirp.org/journal/PaperInformation.aspx?PaperID=46376
http://www.scirp.org/journal/PaperDownload.aspx?paperID=46376

Citations

Nguyen, L. (2014, May 28). User Model Clustering. (F. Shi, Ed.) Journal of Data Analysis and Information Processing (JDAIP), 2(2), 41–48. doi:10.4236/jdaip.2014.22006

Indexed

Academic Keys, ARDI, Blyun, CALIS, CNKI Scholar, CNPLINKER, CrossRef, EBSCO, EZB, Google Scholar, iScholar, Open Access Library, Open JGate, PBN, Research Bible, SciLit, SHERPA, WorldCat, ZDB.

Cited

Google Scholar Cited by (January, 2018): 4, 3, 2, 1

Metrics

SCIRP views for the paper (October 2015): 1646
SCIRP downloads for the paper (October 2015): 856
The 2year Googlebased Journal Impact Factor 2GJIF (October 2015): 1

Categories

Computer Science


Loc Nguyen, Thu-Hang T. Ho (2014, March 30). A framework of fetal age and weight estimation
Type

Journal Article

Abstract

Fetal age and weight estimation plays an important role in pregnancy care. There are many estimation formulas created by the combination of statistics and obstetrics. However, such formulas give optimal estimates if and only if they are applied to the specific community or ethnic group whose characteristics they reflect. This paper proposes a framework that supports scientists in discovering and creating new formulas more appropriate to the community or region where they do their research. The discovery algorithm used inside the framework is the core of its architecture. This algorithm is based on heuristic assumptions and aims to produce a good estimation formula as fast as possible. Moreover, the framework gives scientists facilities for exploiting useful information hidden in pregnancy statistical data.
Keywords: fetal age estimation, fetal weight estimation, regression model, estimate formula, estimate framework.
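The framework's discovery algorithm is not reproduced in the abstract. As a hypothetical sketch, one candidate estimation formula of the simple form weight = a + b · age (the data below are illustrative, not clinical) can be fitted by ordinary least squares:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical sample: gestational age in weeks vs. fetal weight in grams.
ages = [28.0, 30.0, 32.0, 34.0, 36.0]
weights = [1100.0, 1400.0, 1700.0, 2000.0, 2300.0]
a, b = fit_line(ages, weights)
```

A discovery framework of the kind described would generate and compare many such candidate regression formulas against local statistical data, keeping the best-fitting one.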

Published

Journal of Gynecology and Obstetrics (JGO), Vol. 2, No. 2, 2014, pages 20–25.
ISSN online: 2376-7820, ISSN print: 2376-7812, Open Access.
Editors: B Suresh Kumar Shetty, Jose Morales, Amany Badawy, Chishimba Mowa, Kamla Kant Shukla, Tiane Chen, Sibel Cevizci, Georgios Androutsopoulos.
Publisher: Science Publishing Group.

Abstracted

The 4th International Conference on Medical Informatics and Telehealth.
DOI: 10.4172/2157-7420.C1.012
Publisher: OMICS International.
Place and date: London, UK, October 6–7, 2016.
Website: http://medicalinformatics.conferenceseries.com/2016

Presented

The research was presented and awarded at the Ho Chi Minh City Society for Reproductive Medicine (HOSREM).
Chief Examiner: Prof. Nguyen, Ngoc-Phuong T.
Place and date: Equatorial Hotel, Ho Chi Minh city, Vietnam, November 26, 2016.

Awarded

Certified by Science Publishing Group

Identifiers

DOI: 10.11648/j.jgo.20140202.13

Links

http://www.sciencepublishinggroup.com/journal/paperinfo.aspx?journalid=255&doi=10.11648/j.jgo.20140202.13

Citations

Nguyen, L., & Ho, H. (2014, March 30). A framework of fetal age and weight estimation. (B. S. Shetty, J. Morales, A. Badawy, C. Mowa, K. K. Shukla, T. Chen, et al., Eds.) Journal of Gynecology and Obstetrics (JGO), 2(2), 20–25. doi:10.11648/j.jgo.20140202.13

Cited


Indexed

Academic Keys, CNKI Scholar, CrossRef, DRJI, EZB, PBN, Research Bible, WorldCat, WZB, ZDB

Metrics

SciencePG views for the paper (October 2015): 318
SciencePG downloads for the paper (October 2015): 49

Categories

Statistics, Medicine, Computer Science


Loc Nguyen (2013). A New Approach for Modeling and Discovering Learning Styles by Using Hidden Markov Model
Type

Journal Article

Abstract

Adaptive learning systems have developed rapidly in recent years, and the “heart” of such systems is the user model. The user model is the representation of information about an individual that is essential for an adaptive system to provide the adaptation effect, i.e., to behave differently for different users. There are some main features in a user model, such as knowledge, goals, learning styles, interests, and background, but knowledge, learning styles, and goals are the features attracting researchers' attention in the adaptive e-learning domain. Learning styles have been surveyed in psychological theories, but it is somewhat difficult to model them in the domain of computer science because learning styles are too unobvious to represent and there is as yet no solid inference mechanism for discovering users' learning styles. Moreover, researchers in the domain of computer science can be confused by the many psychological theories about learning styles when choosing which theory is appropriate to an adaptive system.
In this paper I give an overview of learning styles, answering the question “what are learning styles?”, and then propose a new approach to model and discover students' learning styles by using the Hidden Markov model (HMM). HMM is such a powerful statistical tool that it allows us to predict users' learning styles from observed evidence about them.
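As an illustrative sketch (the states, observations, and probabilities below are hypothetical, not the paper's), the Viterbi algorithm decodes the most likely sequence of hidden learning styles from observed evidence under an HMM:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence given observations under an HMM."""
    # paths[s] = (best path ending in s, its probability)
    paths = {s: ([s], start_p[s] * emit_p[s][obs[0]]) for s in states}
    for o in obs[1:]:
        new = {}
        for s in states:
            prev, p = max(
                ((paths[r][0], paths[r][1] * trans_p[r][s]) for r in states),
                key=lambda t: t[1],
            )
            new[s] = (prev + [s], p * emit_p[s][o])
        paths = new
    return max(paths.values(), key=lambda t: t[1])[0]

# Hypothetical two-style model: visual vs. verbal learners, with
# observed choices of learning material (video or text).
states = ["visual", "verbal"]
start_p = {"visual": 0.5, "verbal": 0.5}
trans_p = {"visual": {"visual": 0.8, "verbal": 0.2},
           "verbal": {"visual": 0.2, "verbal": 0.8}}
emit_p = {"visual": {"video": 0.7, "text": 0.3},
          "verbal": {"video": 0.2, "text": 0.8}}
best = viterbi(["video", "video", "text"], states, start_p, trans_p, emit_p)
```

The sticky transition probabilities let one contrary observation (a single text access) be absorbed without flipping the inferred style, which mirrors the stability a learning-style model needs.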

Published

Global Journal of Human Social Science: G - Linguistics & Education, Volume 13, Issue 4, Version 1.0, Year 2013, pages 1–10.
Type: Double Blind Peer Reviewed International Research Journal.
ISSN online: 2249-460X, ISSN print: 0975-587X. Open Access.
Editor: George Perry and more.
Publisher: Global Journals Inc. (U.S.).

Awarded

Certified by Global Journal of Human Social Science

Identifiers

FOR Code: 130304

Links

http://socialscienceresearch.org/index.php/GJHSS/article/view/609

Citations

Nguyen, L. (2013). A New Approach for Modeling and Discovering Learning Styles by Using Hidden Markov Model. (G. Perry et al., Eds.) Global Journal of Human Social Science: G - Linguistics & Education, 13(4), 1–10. Retrieved from http://socialscienceresearch.org/index.php/GJHSS/article/view/609

Cited

Google Scholar Cited by (November, 2016): 3, 2, 1

Indexed

ABCentral, BASE, Cabell's Directories, Docstoc, Google Scholar, Journal Seek, Open JGate, Pennsylvania Digital Library, ProQuest, Scribd, UlrichsWeb

Metrics

Google Scholar citation (2014): h5-index = 4, h5-median = 6

Categories

Computer Science, Education, Psychology, Mathematics


Loc Nguyen (2013, July 15). Overview of Bayesian Network
Type

Report

Abstract

Bayesian network is applied widely in machine learning, data mining, diagnosis, etc.; it has a solid evidence-based inference mechanism which is familiar to human intuition. However, Bayesian network causes a little confusion because there are many complicated concepts, formulas, and diagrams relating to it. Such concepts should be organized and presented in a clear manner so that they are easy to understand. This is the goal of this report.
The report includes 4 main parts that cover the principles of Bayesian network:
- Part 1: Introduction to Bayesian network, giving some basic concepts.
- Part 2: Bayesian network inference, discussing the inference mechanism inside Bayesian network.
- Part 3: Parameter learning, telling us how to update the parameters of a Bayesian network.
- Part 4: Structure learning, surveying some main techniques to build up a Bayesian network.
Keywords: Bayesian network, parameter learning, structure learning.
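A minimal example of the evidence-based inference mentioned above: exact posterior computation by enumeration in a two-node network Cause → Effect (the network and its probabilities are illustrative only):

```python
def posterior(prior, likelihood, evidence_value):
    """Exact inference in a two-node Bayesian network Cause -> Effect:
    P(cause | effect = evidence_value) by enumeration and normalisation."""
    joint = {c: prior[c] * likelihood[c][evidence_value] for c in prior}
    z = sum(joint.values())  # P(effect = evidence_value)
    return {c: p / z for c, p in joint.items()}

# P(Rain) and the CPT P(WetGrass | Rain) for a toy diagnosis network.
prior = {"rain": 0.2, "dry": 0.8}
likelihood = {
    "rain": {"wet": 0.9, "not_wet": 0.1},
    "dry":  {"wet": 0.2, "not_wet": 0.8},
}
p = posterior(prior, likelihood, "wet")
# Observing wet grass raises P(rain) from 0.2 to 0.18/0.34 ≈ 0.53.
```

Part 2 of the report covers how the same Bayes-rule update generalises to multi-node networks, where enumeration is replaced by more efficient inference schemes.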

Published

Science Journal of Mathematics and Statistics (SJMS), Volume 2013, July 15, 2013.
ISSN: 2276-6324, Open Access.
Publisher: Science Journal Publication.

Awarded

Certified by Science Journal Publication

Identifiers

Article ID: sjms105
DOI: 10.7237/sjms/105

Links

http://www.sjpub.org/sjms/abstract/sjms105.html

Citations

Nguyen, L. (2013, July 15). Overview of Bayesian Network. University of Technology, Ho Chi Minh city, Vietnam. Warri, Delta State, Nigeria: Science Journal Publication. doi:10.7237/sjms/105

Cited


Indexed

CrossRef, Gale, Google Scholar, Summon Serials Solutions, UlrichsWeb

Metrics


Categories

Mathematics


Loc Nguyen (2013, June). The Bayesian approach and suggested stopping criterion in Computerized Adaptive Testing
Type

Journal Article

Abstract

Computer-based tests have more advantages than traditional paper-based tests in the era of the internet and computers. Computer-based testing allows examinees to perform tests at any time and any place, and the testing environment becomes more realistic. Moreover, it is very easy to assess examinees' ability by using computerized adaptive testing (CAT). CAT is considered a branch of computer-based testing, but it improves the accuracy of the test score because CAT systems try to choose items (tests, exams, questions, etc.) which are suited to examinees' abilities; such items are called adaptive items.
The important problem in CAT is how to estimate examinees' abilities so as to select the best items for them. There are some methods to solve this problem, such as maximum likelihood estimation, but I apply the Bayesian method to computing ability estimates. In this paper, I also suggest a stopping criterion for the CAT algorithm: the process of testing ends only when the examinee's knowledge becomes saturated (she/he can't do the test better or worse) and such knowledge is her/his actual knowledge.
Keywords: Bayesian inference, computerized adaptive test.
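A sketch of the Bayesian ability-estimation step, assuming a Rasch item-response model on a discrete ability grid (the paper's exact model and stopping criterion are not reproduced; items and responses below are hypothetical):

```python
import math

def rasch(theta, b):
    """Rasch model: probability of a correct response to an item
    of difficulty b by an examinee of ability theta."""
    return 1.0 / (1.0 + math.exp(b - theta))

def update(posterior, grid, b, correct):
    """Bayesian update of the ability distribution after one item."""
    like = [rasch(t, b) if correct else 1.0 - rasch(t, b) for t in grid]
    new = [p * l for p, l in zip(posterior, like)]
    z = sum(new)
    return [p / z for p in new]

grid = [g / 10.0 for g in range(-30, 31)]     # ability grid on [-3, 3]
posterior = [1.0 / len(grid)] * len(grid)     # uniform prior
# Three administered items: (difficulty, was the response correct?).
for b, correct in [(0.0, True), (0.5, True), (1.0, False)]:
    posterior = update(posterior, grid, b, correct)
ability = sum(t * p for t, p in zip(grid, posterior))  # posterior mean
```

A saturation-style stopping rule in this setting would end the test once the posterior mean stops changing noticeably between items.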

Published

International Journal of Research in Engineering and Technology (IJRET), Vol. 2, No. 1, 2013, pages 36–38.
ISSN: 2277-4378.
Editor: Ahmad T. Al-Taani.
Publisher: Planetary Scientific Research Center.
International Conference on Computational and Information Sciences (ICCIS'2013). Phuket, Thailand, June 23–24, 2013.

Identifiers


Links

https://goo.gl/kXxczD

Citations

Nguyen, L. (2013, June). The Bayesian approach and suggested stopping criterion in Computerized Adaptive Testing. (A. T. Al-Taani, Ed.) International Journal of Research in Engineering and Technology (IJRET), 2(1), 36–38. Retrieved from https://goo.gl/kXxczD

Cited


Indexed

Ask.com, Bing, Google Scholar, Yahoo, Yandex

Metrics

Google Scholar citation on ICCIS (2014): h5-index = 11, h5-median = 15

Categories

Computer Science, Mathematics, Education


Loc Nguyen (2013, June 5). A new method to determine separated hyperplane for nonparametric sign test in multivariate data
Type

Conference Presentation

Abstract

Nonparametric testing is necessary when a statistical sample does not conform to a normal distribution or we have no knowledge about the sample distribution. The sign test is a popular and effective nonparametric test, but it cannot be applied to multivariate data in which observations are vectors, because ordering and comparison operators are not defined in n-dimensional vector space. So, this research proposes a new approach to perform the sign test on a multivariate sample by using a hyperplane to separate multidimensional observations into two sides. Therefore, it is possible for the sign test to assign plus signs and minus signs to the observations on each side. Moreover, this research introduces a new method to determine the separating hyperplane. This method is a variant of the support vector machine (SVM); thus, the optimized hyperplane is the one that contains the null hypothesis and splits the observations as discriminatively as possible.
Keywords: separated hyperplane, nonparametric sign test.
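A sketch of the sign-assignment step for a fixed hyperplane, followed by an exact two-sided binomial sign test (the SVM-like optimization that chooses the hyperplane is not shown; points and the hyperplane are illustrative):

```python
from math import comb

def signs_by_hyperplane(points, w, c):
    """Assign +/- to multivariate observations by which side of the
    hyperplane w . x + c = 0 they fall on (boundary points discarded)."""
    sides = [sum(wi * xi for wi, xi in zip(w, p)) + c for p in points]
    return [s > 0 for s in sides if s != 0]

def sign_test_p_value(signs):
    """Two-sided exact binomial sign test under H0: P(+) = 1/2."""
    n, k = len(signs), sum(signs)
    tail = min(k, n - k)
    p = sum(comb(n, i) for i in range(tail + 1)) / 2 ** (n - 1)
    return min(p, 1.0)

points = [(1, 1), (2, 0), (0.5, 2), (1, 2), (2, 2), (-1, -1)]
signs = signs_by_hyperplane(points, (1.0, 1.0), 0.0)  # x + y = 0
p_value = sign_test_p_value(signs)  # 5 plus signs out of 6
```

With the plus/minus labels fixed this way, the rest is the ordinary univariate sign test, which is the point of the reduction.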

Presented

STATISTICS and its INTERACTIONS with OTHER DISCIPLINES (SIOD 2013).
Editors: Vinh-Danh Le, Audri Mukhopadhyay, Gia-Thu Pham.
Publisher: Ton Duc Thang University.
Place and date: Ton Duc Thang University, Ho Chi Minh, Vietnam, June 4–7, 2013.

Identifiers

DOI: 10.13140/RG.2.2.20886.86080/1

Links

https://goo.gl/NGQ8jq

Citations

Nguyen, L. (2013, June 5). A new method to determine separated hyperplane for nonparametric sign test in multivariate data. In V.-D. Le, A. Mukhopadhyay, & G.-T. Pham (Eds.), STATISTICS and its INTERACTIONS with OTHER DISCIPLINES (SIOD 2013). Ho Chi Minh: Ton Duc Thang University. doi:10.13140/RG.2.2.20886.86080/1

Cited


Indexed


Metrics


Categories

Statistics, Mathematics


Loc Nguyen, Minh-Phung T. Do, Thanh-Nguyen Vu, Nam-Dung Tran (2013, March 20). A New Approach for Collaborative Filtering Based on Mining Frequent Itemsets
Type

Conference Paper

Abstract

As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences of other users. In this paper, we first propose a new model-based CF approach implemented with a frequent-itemset mining technique, under the assumption that “the larger the support of an item, the more likely it is that the item will occur in some frequent itemset”. We then present enhancement techniques, namely bit representation, bit matching, and bit mining, in order to speed up the processing of the CF method.
Keywords: collaborative filtering, mining frequent itemsets, bit matching, bit mining.
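The bit-representation idea can be sketched as follows (toy transactions, not the paper's implementation): each item becomes a bit vector over transactions, and the support of an itemset is the popcount of the bitwise AND of its item masks:

```python
def item_bitmasks(transactions, items):
    """Represent each item as an integer bitmask over transactions:
    bit t is set when transaction t contains the item."""
    masks = {}
    for it in items:
        m = 0
        for t, trans in enumerate(transactions):
            if it in trans:
                m |= 1 << t
        masks[it] = m
    return masks

def support(itemset, masks, n_trans):
    """Support of an itemset = popcount of the AND of its item masks."""
    acc = (1 << n_trans) - 1  # all-ones mask
    for it in itemset:
        acc &= masks[it]
    return bin(acc).count("1") / n_trans

# Toy user-preference transactions (e.g. sets of rated items).
transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]
masks = item_bitmasks(transactions, ["a", "b", "c"])
s_ab = support({"a", "b"}, masks, len(transactions))  # 2 of 4 transactions
```

Replacing set intersections with single AND instructions is what makes the bit-matching and bit-mining steps fast on large rating databases.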

Published

ACIIDS'13 Proceedings of the 5th Asian conference on Intelligent Information and Database Systems - Volume Part II, Session: Tools and Applications, pages 19–29. Series Title: Lecture Notes in Computer Science.
ISBN print: 978-3-642-36542-3, ISBN online: 978-3-642-36543-0, Series ISSN: 0302-9743.
Editors: Ali Selamat, Ngoc Thanh Nguyen, Habibollah Haron.
Publisher: Springer Berlin Heidelberg.
Place and date: Kuala Lumpur, Malaysia, March 18–20, 2013.

Full Version Accepted

International Journal of Applied Mathematics and Machine Learning. Acceptance date: July 15, 2016.
ISSN: 2394-2258.
Editors: Li Li, Shuaiqi Liu, Mehmet Koc, José Luis López-Bonilla, Balan Sethuramalingam, Bin Guo, Loc Nguyen, Hind Rustum Mohammed Shaaban, Srinivas Nowduri.
Publisher: Scientific Advances Publishers.

Identifiers

DOI: 10.1007/978-3-642-36543-0_3

Links

http://link.springer.com/chapter/10.1007/9783642365430_3

Citations

Nguyen, L., Do, M.-P. T., Vu, T.-N., & Tran, N.-D. (2013, March 20). A New Approach for Collaborative Filtering Based on Mining Frequent Itemsets. In A. Selamat, N. T. Nguyen, & H. Haron (Eds.), ACIIDS'13 Proceedings of the 5th Asian conference on Intelligent Information and Database Systems. II, pp. 19–29. Kuala Lumpur: Springer. doi:10.1007/978-3-642-36543-0_3

Cited


Indexed

ACM Digital Library, BibSonomy, DBLP, Google Scholar, Research Gate, SpringerLink

Metrics

Springer downloads (2017, January): 1114

Categories

Computer Science

Charged

Purchase on Springer.


Loc Nguyen (2011). The method of seven qigong exercises of archery simulation

Minh-Phung T. Do, Dung V. Nguyen, Loc Nguyen (2010, August 20). Model-based Approach for Collaborative Filtering
Type

Conference Paper

Abstract

Collaborative filtering (CF) is a popular algorithm for recommender systems, in which the items recommended to users are determined by surveying their communities. CF has good prospects because it can cast off limitations of recommendation by discovering more potential items hidden within communities. Such items are likely to be suitable to users and should be recommended to them. There are two main approaches to CF: memory-based and model-based. A memory-based algorithm loads the entire database into system memory and makes predictions for recommendation based on this in-memory database. It is simple but encounters the problem of huge data. A model-based algorithm tries to compress the huge database into a model and performs the recommendation task by applying an inference mechanism to this model, so it can respond to a user's request instantly. This paper surveys common techniques for implementing model-based algorithms. We also give a new idea for the model-based approach so as to gain high accuracy and solve the problem of sparse matrices by applying evidence-based inference techniques.
Keywords: collaborative filtering, memorybased approach, modelbased approach, expectation maximization, Bayesian network.

Published

Proceedings of The 6th International Conference on Information Technology for Education (IT@EDU2010), pages 217–225.
Publisher: Ho Chi Minh University of Information Technology.
Place and date: Ho Chi Minh city and Phan Thiet, Viet Nam, August 18–20, 2010.

Identifiers


Links

https://goo.gl/BHu7ge

Citations

Do, M.-P. T., Nguyen, D. V., & Nguyen, L. (2010, August 20). Model-based Approach for Collaborative Filtering. Proceedings of The 6th International Conference on Information Technology for Education (IT@EDU2010) (pp. 217–225). Ho Chi Minh, Vietnam: Ho Chi Minh University of Information Technology. Retrieved from https://goo.gl/BHu7ge

Cited


Indexed


Metrics


Categories

Computer Science


Loc Nguyen (2010). Discovering User Interests by Document Classification
Type

Book Chapter

Abstract

User interest is one of the personal traits attracting researchers' attention in user modeling and user profiling. User interest competes with user knowledge to become the most important characteristic in the user model. Adaptive systems need to know user interests so that they can provide adaptation to users. For example, adaptive learning systems tailor learning materials (lesson, example, exercise, test...) to user interests. I propose a new approach for discovering user interests based on document classification. The basic idea is to consider user interests as classes of documents. The process of classifying documents is also the process of discovering user interests. There are two new points of view:
- The series of user accesses in his/her history are modeled as documents, so the user is referred to indirectly as a "document".
- User interests are the classes that such documents belong to.
Our approach includes the four following steps:
- Documents in the training corpus are represented according to the vector model. Each element of a vector is the product of term frequency and inverse document frequency. However, the inverse document frequency can be removed from each element for convenience.
- The training corpus is classified by applying a decision tree, support vector machine, or neural network. Classification rules (weight vectors W^{*}) are drawn from the decision tree (support vector machine). They are used as classifiers.
- The user's access history is mined to find maximum frequent itemsets. Each itemset is considered an interesting document and its member items are considered terms. Such interesting documents are modeled as vectors.
- The classifiers (see step 3) are applied to these interesting documents in order to choose which classes are most suitable to them. Such classes are user interests.
This approach is based on document classification, but it also relates to information retrieval in the manner of representing documents. Hence section 1 discusses the vector model for representing documents. Support vector machine, decision tree, and neural network on document classification are mentioned in sections 2, 3, and 4. The main technique to discover user interests is described in section 5. Section 6 is the evaluation.
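The first step, representing documents as TF x IDF vectors, can be sketched as follows. This is a minimal illustration with an invented toy corpus (the terms and documents are hypothetical), not the chapter's experimental setup:

```python
import math
from collections import Counter

# Toy training corpus; each document stands in for a user's access history.
corpus = [
    ["math", "algebra", "algebra"],
    ["math", "geometry"],
    ["music", "guitar"],
]

def tf_idf_vector(doc, corpus, vocab):
    """Each vector element is term frequency times inverse document frequency."""
    n = len(corpus)
    tf = Counter(doc)
    vec = []
    for term in vocab:
        df = sum(1 for d in corpus if term in d)  # document frequency
        idf = math.log(n / df) if df else 0.0     # inverse document frequency
        vec.append(tf[term] * idf)
    return vec

vocab = sorted({t for d in corpus for t in d})
vectors = [tf_idf_vector(d, corpus, vocab) for d in corpus]
```

Terms absent from a document get weight zero, and a term frequent in one document but rare across the corpus (like "algebra" here) gets a higher weight than a common term like "math", which is exactly what the classifiers in the later steps rely on.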

Published

In I-Hsien Ting, Hui-Ju Wu, and Tien-Hwa Ho (Eds.), Mining and Analyzing Social Networks (Vol. 288 in series "Studies in Computational Intelligence"), chapter 9, pages 139-159.
ISBN online: 978-3-642-13422-7, ISBN print: 978-3-642-13421-0, Series ISSN: 1860-949X, DOI: 10.1007/978-3-642-13422-7
Editors: I-Hsien Ting, Hui-Ju Wu, Tien-Hwa Ho.
Publisher: Springer.

Identifiers

DOI: 10.1007/978-3-642-13422-7_9

Links

http://link.springer.com/chapter/10.1007/9783642134227_9

Citations

Nguyen, L. (2010). Discovering User Interests by Document Classification. In I.-H. Ting, H.-J. Wu, & T.-H. Ho (Eds.), Mining and Analyzing Social Networks (Vol. 288 in series "Studies in Computational Intelligence", pp. 139-159). Springer Berlin Heidelberg. doi:10.1007/978-3-642-13422-7_9

Cited


Indexed

BibSonomy, DBLP, Google Books, Google Scholar, Journal Seek (Current Mathematical Publications), MathSciNet and Mathematical Reviews,
PubZone, Researchr, Scopus, SpringerLink, UlrichsWeb, WorldCat, Zentralblatt MATH

Metrics

SJR on series "Studies in Computational Intelligence" (2012): SJR rank = 0.223, h-index = 17.
IEEE Digital Library readers for the chapter (October 2015): 3
IEEE Digital Library for the chapter (October 2015): 414

Categories

Computer Science

Charged

Purchase on Springer,
Purchase on Amazon,
Purchase on Holistic Page,
Purchase on Blackwell's,
Purchase on Krisostomus.


Loc Nguyen (2010). Overview of The System of Acupuncture Spots in Oriental Medicine based on Oriental Philosophy

Loc Nguyen (2009, September 25). Incorporating Bayesian Inference into Adaptation Rules in AHA architecture
Type

Conference Paper

Abstract

Adaptive Hypermedia System (AHS) aims to provide users the adaptation effect based on their characteristics. In other words, AHS ensures that the links that are offered and the content of the information pages are adapted to each individual user. AHA!, developed by De Bra, is an open Adaptive Hypermedia Architecture that is suitable for many different applications; it aims at generic purposes. The architecture of AHA, based on the Dexter Reference Model, has some prominent features, but the inner user model is built up by the overlay method, in which the domain is decomposed into a set of elements and the overlay is simply a set of masteries over those elements. Although the overlay model makes it easy to represent user information, there is no inference mechanism for reasoning out new assumptions about the user. So I propose a new way of incorporating Bayesian inference into AHA so that it is able to improve the modeling functionality in AHA.

Published

Proceedings of 12th International Conference on Interactive Computer aided Learning (ICL2009).
ISBN: 978-3-89958-481-3
Publisher: Kassel University Press, Kassel, Germany.
Place and date: Villach, Austria. September 23-25, 2009.

Identifiers


Links

http://www.iclconference.org/dl/proceedings/2009/archive.htm

Citations

Nguyen, L. (2009, September 25). Incorporating Bayesian Inference into Adaptation Rules in AHA architecture. Proceedings of 12th International Conference on Interactive Computer aided Learning (ICL2009). Villach, Austria: Kassel University Press, Kassel, Germany. Retrieved from http://www.iclconference.org/dl/proceedings/2009/archive.htm

Cited


Indexed


Metrics


Categories

Computer Science, Mathematics

Charged

Purchase Proceedings of ICL2009.


Loc Nguyen (2009, August 31). A Proposal Discovering User Interests by Support Vector Machine and Decision Tree on Document Classification
Type

Conference Paper

Abstract

User interest is one of the personal traits attracting researchers' attention in user modeling and user profiling. User interest competes with user knowledge to become the most important characteristic in the user model. Adaptive systems need to know user interests so that they can provide adaptation to users. For example, adaptive learning systems tailor learning materials (lesson, example, exercise, test, ...) to user interests. I propose a new approach for discovering user interests based on document classification. The basic idea is to consider user interests as classes of documents. The process of classifying documents is also the process of discovering user interests. There are two new points of view:
- The series of user accesses in his/her history are modeled as documents, so the user is referred to indirectly as a "document".
- User interests are the classes that such documents belong to.

Published

The International Workshop on Social Networks Mining and Analysis for Business Applications (SNMABA2009) in conjunction with The 2009 IEEE International Conference on Social Computing (SocialCom2009).
Published in Proceedings of the 2009 International Conference on Computational Science and Engineering (CSE '09), volume 4, pages 809-814.
ISBN online: 978-0-7695-3823-5, ISBN print: 978-1-4244-5334-4
Publisher: IEEE
Place and date: Vancouver, Canada. August 29-31, 2009.

Identifiers

DOI: 10.1109/CSE.2009.112
INSPEC Accession Number: 10915393
IEEE Article Number: 5283280
SNMABA ID: SNMABA09107
ACM Citation ID: 1633778

Links

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5283280
http://dl.acm.org/citation.cfm?id=1633778

Citations

Nguyen, L. (2009, August 31). A Proposal Discovering User Interests by Support Vector Machine and Decision Tree on Document Classification. The International Workshop on Social Networks Mining and Analysis for Business Applications (SNMABA2009) in conjunction with The 2009 IEEE International Conference on Social Computing (SocialCom2009), 4, pp. 809-814. Vancouver, Canada: 2009 International Conference on Computational Science and Engineering (CSE '09), IEEE. doi:10.1109/CSE.2009.112

Cited

Google Scholar Cited by (October 2016): 5

Indexed

ACM Digital Library, CrossRef, DBLP, DeepDyve, Google Scholar, IEEE Digital Library, Pubget, Scopus

Metrics

Acceptance rate is 9%
Google Scholar citation on SocialCom (2014): h5-index = 19, h5-median = 23, work cited = 3
IEEE Digital Library usage (October 2015): 155

Categories

Computer Science

Charged

Purchase on IEEE Digital Library.


Loc Nguyen, Phung Do (2009, July 16). Evolution of Parameters in Bayesian Overlay Model
Type

Conference Paper

Abstract

Adaptive learning systems require a well-organized user model along with a solid inference mechanism. Overlay modeling is the method in which the domain is decomposed into a set of elements and the user model is simply a set of masteries over those elements. The combination of the overlay model and a Bayesian network (BN) makes use of the flexibility and simplicity of overlay modeling and the powerful inference of BN. However, it is compulsory to predefine the parameters, namely the Conditional Probability Tables (CPTs), in the BN, and nothing absolutely ensures the correctness of these CPTs. This research focuses on how to enhance the quality of the parameters in the Bayesian overlay model; in other words, this is the evolution of CPTs.
Keywords: adaptive learning, user modeling, user model, learner model, overlay model, Bayesian network, parameter learning.
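One common way to evolve a CPT entry from observed evidence is Bayesian parameter learning with a Beta prior; the sketch below illustrates that idea with invented numbers, and the paper's exact evolution method may differ:

```python
# A minimal sketch of refining one CPT entry from observed evidence,
# using a Beta-prior update (a standard parameter-learning scheme).

def evolve_cpt_entry(alpha, beta, successes, failures):
    """Posterior mean of the CPT entry after new observations.

    alpha, beta: pseudo-counts encoding the initial CPT entry.
    successes, failures: counts of positive/negative evidence.
    """
    return (alpha + successes) / (alpha + beta + successes + failures)

# Initial entry P(mastered) = 0.5 encoded as Beta(1, 1); after 8 correct
# and 2 incorrect answers the entry evolves to (1+8)/(1+1+8+2) = 0.75.
p = evolve_cpt_entry(1, 1, 8, 2)
```

The appeal of this scheme is that the pseudo-counts let a hand-authored CPT entry be gradually corrected by data rather than replaced outright.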

Published

Proceedings of The 2009 International Conference on Artificial Intelligence (ICAI'09), pages 324-329. The 2009 World Congress in Computer Science, Computer Engineering, and Applied Computing (WORLDCOMP'09).
ISBN: 1-60132-107-4, 1-60132-108-2 (1-60132-109-0)
Editors: Hamid R. Arabnia, David de la Fuente, Jose A. Olivas.
Publisher: CSREA Press USA.
Place and date: Monte Carlo Resort, Las Vegas, Nevada, USA, July 13-16, 2009.
Proceedings of The Second International Conference on Information and Communication Technologies and Accessibility (ICTA 2009), pages 257-268.
ISBN: 9789973375162
Place and date: Hammamet, Tunisia. May 7-9, 2009.

Identifiers


Links

https://goo.gl/QwMYqq

Citations

Nguyen, L., & Do, P. (2009, July 16). Evolution of Parameters in Bayesian Overlay Model. In H. R. Arabnia, D. de la Fuente, & J. A. Olivas (Eds.), Proceedings of The 2009 International Conference on Artificial Intelligence (ICAI'09), The 2009 World Congress in Computer Science, Computer Engineering, and Applied Computing (WORLDCOMP'09) (pp. 324-329). Monte Carlo Resort, Las Vegas, Nevada, USA: CSREA Press USA. Retrieved from https://goo.gl/QwMYqq

Cited


Indexed

AMiner, BibSonomy, DBLP, IET Inspec, PubZone, Research Gate, Researchr, TDG Scholar

Metrics

ICAI’09: Acceptance rate is 27%
CORE ranking on ICAI (2013): C
Google Scholar citation on ICAI (2014): h5-index = 8, h5-median = 11

Categories

Mathematics, Computer Science


Christoph Fröschl, Loc Nguyen (2009, July 16). State of the Art of Adaptive Learning
Type

Study report

Abstract

Traditional learning, with live interactions between teacher and students, has achieved many successes, but nowadays the demand for personalized learning arises as computers and the internet are booming. Learning is mostly associated with activities involving computers and interactive networks simultaneously, and users require that learning materials/activities be provided to them in a suitable manner. This is the origin of the adaptive learning domain. For this reason, an adaptive learning system (ALS) must have the ability to change its actions to provide learning content and a pedagogic environment/method for every student in accordance with her/his individual characteristics. Adaptive systems have been researched and developed for a long time and there are many kinds of them, so it is very difficult for researchers to analyze them. In this study report, I collect scientific resources to bring out an overview of adaptive learning systems along with their features. The main reference is the master's thesis "User Modeling and User Profiling in Adaptive E-learning Systems" by Christoph Fröschl. I express my deep gratitude to the author Christoph Fröschl for providing her/his great research.
Keywords: adaptive learning, user model, learner model, intelligent tutoring system, adaptive educational hypermedia system, AEHS.

Published

Proceedings of The 2009 International Conference on e-Learning, e-Business, Enterprise Information Systems, and e-Government (EEE 2009), pages 126-133. The 2009 World Congress in Computer Science, Computer Engineering, and Applied Computing (WORLDCOMP'09).
ISBN: 1-60132-100-7
Editors: Hamid R. Arabnia, Azita Bahrami, and Ashu M. G. Solo.
Publisher: CSREA Press USA.
Place and date: Monte Carlo Resort, Las Vegas, Nevada, USA, July 13-16, 2009.

Identifiers


Links

https://goo.gl/Xn39eN

Citations

Fröschl, C., & Nguyen, L. (2009, July 16). State of the Art of Adaptive Learning. In H. R. Arabnia, A. Bahrami, & A. M. Solo (Eds.), Proceedings of The 2009 International Conference on e-Learning, e-Business, Enterprise Information Systems, and e-Government (EEE 2009). The 2009 World Congress in Computer Science, Computer Engineering, and Applied Computing (WORLDCOMP'09) (pp. 126-133). Las Vegas, Nevada, USA: CSREA Press USA. Retrieved from https://goo.gl/Xn39eN

Cited

Google Scholar Cited by (February, 2018): 1

Indexed

BibSonomy, DBLP, IET Inspec, Research Gate

Metrics

Acceptance rate is 29%
CORE ranking on WORLDCOMP (2013): C

Categories

Computer Science, Education


Loc Nguyen, Bich-Thuy T. Dong (2009, July 10). ZEBRA: A new User Modeling System for Triangular Model of Learners' Characteristics
Type

Conference Paper, Conference Poster

Abstract

The core of an adaptive system is the user model, which is a representation of information about an individual. The user model is necessary for an adaptive system to provide the adaptation effect, i.e., to behave differently for different users. The system that collects user information to build up the user model and reasons out new assumptions about the user is called a user modeling system (UMS). There are two main tendencies towards implementing a UMS: the domain-independent UMS and the domain-dependent UMS. The former is the widely known generic UMS, but our approach focuses especially on the domain-dependent UMS applied to adaptive e-learning. The reason is that a domain-independent UMS is too generic to "cover" all learners' characteristics in e-learning, which may cause unpredictable bad consequences in the adaptation process. Note that the user is considered a learner in the e-learning context. Many user characteristics can be modeled, but each characteristic is in accordance with a respective modeling method. It is impossible to model all learners' characteristics for the very reason that "there is no modeling method that fits all characteristics". To overcome these obstacles and difficulties, we propose a new model of the learner, the "Triangular Learner Model (TLM)", composed of three main learner characteristics: knowledge, learning style, and learning history. TLM with these three underlying characteristics will cover the whole of the learner's information required by the learning adaptation process. The UMS which builds up and manipulates TLM, named Zebra, is also described in detail. We also propose a new architecture for an adaptive application and the interaction between such an application and Zebra.
Keywords: user modeling system, adaptive learning, user model, learner model, Triangular Learner Model, TLM, Zebra.

Published

AIED 2009: 14th conference on Artificial Intelligence in Education, Proceedings of the Workshop on "Enabling creative learning design: how HCI, User Modeling and Human Factors Help", pages 42-51.
ISBN online: 978-1-60750-446-7, ISBN print: 978-1-60750-028-5. DOI: 10.3233/978-1-60750-028-5-813
Editors: George D. Magoulas, Patricia Charlton, Diana Laurillard, Kyparisia Papanikolaou, Maria Grigoriadou.
Publisher: IOS Press, Amsterdam, The Netherlands.
Conference place and date: Thistle Hotel, Brighton, UK. July 6-10, 2009.
Poster at The 11th Human-Centred Technology Postgraduate Workshop at the University of Sussex.
Place and date: University of Sussex, United Kingdom. July 13, 2009.
Website: http://eventseer.net/e/10209

Identifiers


Links

https://goo.gl/cVzC6h
http://www.dcs.bbk.ac.uk/~gmagoulas/LD_WorkshopsProceedingsweb.pdf#page=46

Citations

Nguyen, L., & Dong, B.-T. T. (2009, July 10). ZEBRA: A new User Modeling System for Triangular Model of Learners' Characteristics. In G. D. Magoulas, P. Charlton, D. Laurillard, K. Papanikolaou, & M. Grigoriadou (Eds.), AIED 2009: 14th conference on Artificial Intelligence in Education, Proceedings of the Workshop on "Enabling creative learning design: how HCI, User Modeling and Human Factors Help" (pp. 42-51). Brighton, United Kingdom: IOS Press, Amsterdam, The Netherlands. Retrieved from https://goo.gl/cVzC6h

Cited


Indexed

Academia.edu, ACM Digital Library, BibSonomy, BIROn, DBLP, Google scholar, Microsoft Academy Search, Research Gate

Metrics

CORE ranking on AIED (2013): A
Google Scholar citation on AIED (2014): h5-index = 15, h5-median = 25
Zaïane (2011): Top Tier Conferences

Categories

Computer Science


Loc Nguyen, Phung Do (2009, June 3). Learning Concept Recommendation based on Sequential Pattern Mining
Type

Conference Paper

Abstract

Sequential pattern mining is a new trend in the data mining domain with many useful applications, especially commercial applications, but it also yields surprising effects in adaptive learning. Suppose there is an adaptive e-learning website where a student accesses learning materials / does exercises relating to domain concepts in sessions. His learning sequences, which are lists of concepts accessed over all study sessions, construct the learning sequence database S. S is mined to find the sequences which are expected to be learned frequently or preferred by the student. Such sequences, called sequential patterns, are used to recommend appropriate concepts / learning objects to the student in his next visits. This results in enhancing the quality of the adaptive learning system. This process is sequential pattern mining. In this paper, I also propose an approach to break a sequential pattern s = <c_{1}, c_{2}, ..., c_{m}> into association rules, each with a left-hand side and a right-hand side in the form c_{i} → c_{j}. The left-hand side is considered the source concept; the right-hand side is treated as the recommended concept available to students.
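The rule-extraction step can be sketched as follows. This minimal sketch assumes every ordered pair of concepts in the pattern yields a rule c_i → c_j; whether the paper restricts rules to adjacent concepts or other subsequences is not stated here, so treat the pairing choice as an assumption:

```python
# Break a sequential pattern <c1, c2, ..., cm> into candidate rules
# c_i -> c_j for every ordered pair with i < j (pairing choice assumed).

def pattern_to_rules(pattern):
    """Return (source concept, recommended concept) pairs from a pattern."""
    return [(pattern[i], pattern[j])
            for i in range(len(pattern))
            for j in range(i + 1, len(pattern))]

rules = pattern_to_rules(["c1", "c2", "c3"])
```

When a student touches a source concept, the matching rules nominate the corresponding recommended concepts for the next visit.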

Published

Proceedings of The 2009 Third International Digital Ecosystems and Technologies Conference (IEEE-DEST 2009), pages 66-71.
IEEE Catalog Number: CFP09DES-CDR, ISBN online: 978-1-4244-2346-0, ISBN print: 978-1-4244-2345-3, Library of Congress: 2008902855
Editors: Elizabeth Chang, Farookh Hussain, and Erdal Kayacan.
Publisher: IEEE
Place and date: Harbiye Military Museum and Cultural Center, Istanbul, Turkey, May 31 - June 3, 2009.

Identifiers

DOI: 10.1109/DEST.2009.5276694
INSPEC Accession Number: 10904985
IEEE Article Number: 5276694

Links

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5276694

Citations

Nguyen, L., & Do, P. (2009, June 3). Learning Concept Recommendation based on Sequential Pattern Mining. In E. Chang, F. Hussain, & E. Kayacan (Eds.), Proceedings of The 2009 Third International Digital Ecosystems and Technologies Conference (IEEE-DEST 2009) (pp. 66-71). Istanbul, Turkey: IEEE. doi:10.1109/DEST.2009.5276694

Cited

Google Scholar Cited by (October 2015): 4

Indexed

CrossRef, IEEE Digital Library, IET Inspec, Google Scholar, Library of Congress, Microsoft Academy Search, Pubget, Scopus, Thomson Reuters CPCI

Metrics

CORE ranking (2013): C
Google Scholar citation (2014): h5-index = 13, h5-median = 15, work cited = 2
IEEE Digital Library usage (October 2015): 100

Categories

Computer Science

Charged

Purchase on IEEE Digital Library.


Loc Nguyen, Phung Do (2009). Combination of Bayesian Network and Overlay Model in User Modeling
Type

Conference Paper, Journal Article

Abstract

The core of an adaptive system is the user model, containing personal information such as knowledge, learning styles, and goals, which is requisite for the personalized learning process. There are many modeling approaches, for example stereotype, overlay, and plan recognition, but they do not provide a solid method for reasoning from the user model. This paper introduces a statistical method that combines Bayesian network and overlay modeling so that it is able to infer the user's knowledge from evidences collected during the user's learning process.
Keywords: Bayesian network, overlay model, user model.
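The kind of inference this combination enables can be illustrated with a single overlay element and one piece of evidence updated via Bayes' rule. The probabilities below are invented for illustration, and the paper's full network is richer than this two-node sketch:

```python
# Update belief in mastery of one overlay element from one evidence item
# (e.g. a correct answer) using Bayes' rule.

def posterior_mastery(prior, p_correct_given_mastered, p_correct_given_not):
    """P(mastered | correct answer) for a two-node Bayesian network."""
    num = p_correct_given_mastered * prior
    den = num + p_correct_given_not * (1 - prior)
    return num / den

# Prior belief 0.4; a correct answer is likely if mastered (0.9),
# but possible by guessing otherwise (0.2).
p = posterior_mastery(0.4, 0.9, 0.2)
```

A plain overlay model would only store the mastery value; wiring the element into a Bayesian network is what lets evidence propagate into updated masteries like this.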

Published

Proceedings of 4th International Conference on Interactive Mobile and Computer Aided Learning (IMCL 2009).
Place and date: Princess Sumaya University for Technology, Amman, Jordan, April 22-24, 2009.
Proceedings of 9th International Conference on Computational Science (ICCS 2009), Lecture Notes in Computer Science (LNCS), Volume 5545, 2009, pages 5-14.
ISBN-13: 978-3-642-01972-2, ISSN: 0302-9743. Series ISSN: 0302-9743.
Editors: Gabrielle Allen, Jarosław Nabrzyski, Edward Seidel, Geert Dick van Albada, Jack Dongarra, and Peter M. A. Sloot.
Publisher: Springer.
Conference place and date: Baton Rouge, Louisiana, USA. May 25-27, 2009.
Website: http://www.iccsmeeting.org/iccs2009
International Journal of Emerging Technologies in Learning (iJET), Vol. 4, No. 4 (2009), pages 41-45.
ISSN online: 1863-0383, ISSN print: 1868-8799, Open Access.
Editor: Michael E. Auer.
Publisher: International Association of Online Engineering (IAOE).

Identifiers

DOI (ICCS 2009): 10.1007/978-3-642-01973-9_2
DOI (iJET): 10.3991/ijet.v4i4.684
ACM Citation ID: 1561116

Links

http://onlinejournals.org/ijet/article/view/684
http://link.springer.com/chapter/10.1007%2F9783642019739_2

Citations

Nguyen, L., & Do, P. (2009). Combination of Bayesian Network and Overlay Model in User Modeling. (M. E. Auer, Ed.) International Journal of Emerging Technologies in Learning (iJET), 4(4), 41-45. doi:10.3991/ijet.v4i4.684

Cited

Google Scholar Cited by (March 2017): 21 (including "Rim, Web Application and Cloud Computing")

Indexed

ACM Digital Library, AMiner, BibSonomy, CiteSeerX, DBLP, DOAJ, EBSCO, EdITLib, Ei, Google Scholar, Google Book, IET Inspec, Microsoft Academic Search, PubZone, Researchr, Scopus, SpringerLink, Thomson Reuters ESCI (iJET), UlrichsWeb, WorldWideScience, Zotero

Metrics

SJR on iJET (2012): SJR rank = 0.169, h-index = 4
CORE ranking on ICCS (2013): A
Google Scholar citation on ICCS (2014): h5-index = 18, h5-median = 27, work cited = 11
Google Scholar citation on iJET (2014): h5-index = 11, h5-median = 16, work cited = 10

Categories

Computer Science, Mathematics

Charged

Purchase on Springer.


Christoph Fröschl, Loc Nguyen, Phung Do (2008, November). Learner Model in Adaptive Learning
Type

Study report

Abstract

Every student has individual features such as knowledge, goals, experiences, interests, backgrounds, personal traits, learning styles, learning activities, and study results. The user model or learner model is constructed from these features. The process of building up the learner model is called the user modeling process or learner modeling process. An adaptive learning system uses the learner model to make adaptation. In other words, an adaptive learning system takes advantage of the individual information available in the learner model in order to tailor learning materials (lessons, exercises, tests, etc.) and teaching methods to each student. Anyway, the learner model is very important to adaptive learning systems and other adaptive applications. This study report focuses on the learner model and is extracted from the master's thesis "User Modeling and User Profiling in Adaptive E-learning Systems" by Christoph Fröschl. I express my deep gratitude to the author Christoph Fröschl for providing her/his great research.
Keywords: learner model, user model, user modeling, learner modeling, adaptive learning.

Published

Proceedings of The 2008 World Congress on Science, Engineering and Technology (WCSET 2008), volume 35, November 2008, pages 396-400. Publication date is November 21-23, 2008.
ISSN: 2070-3740, Open Access.
Publisher: World Academy of Science, Engineering and Technology (WASET).
Place and date: Paris, France, November 21-23, 2008.

Presented

The 5th International Conference on Information Technology in Education and Training (IT@EDU08).
Publisher: University of Information Technology (UIT), Ho Chi Minh city, Vietnam.
Place and date: Ho Chi Minh city, Vietnam, 2008.

Identifiers


Links

https://goo.gl/MEZVJ2

Citations

Fröschl, C., Nguyen, L., & Do, P. (2008, November). Learner Model in Adaptive Learning. The 2008 World Congress on Science, Engineering and Technology (WCSET 2008), 35, pp. 396-400. Paris, France: World Academy of Science, Engineering and Technology (WASET). Retrieved from https://goo.gl/MEZVJ2

Cited


Indexed


Metrics


Categories

Computer Science, Education


Loc Nguyen (2006, January). Image Retrieval by MMM Model on Combination of Image LowLevel Features and HighLevel Semantics (Truy tìm ảnh qua mô hình MMM kết hợp giữa đặc trưng cấp thấp và ngữ nghĩa cấp cao của ảnh)
Type

Journal Article

Abstract

Recent image retrieval methods focus on combining low-level features and high-level semantics of images in query processing. This paper introduces a new efficient method for an image retrieval system. This method, called MMM (Markov model mediator), takes into consideration both low-level features and high-level semantics of images. Low-level features of images are extracted from the color histogram and image segmentation. High-level semantics is learned from the history of users' access patterns and access frequencies on the images. Low-level features and high-level semantics of images are combined by the Markov model mediator. Note that MMM was proposed by the authors Mei-Ling Shyu, Shu-Ching Chen, Na Zhao, Min Chen, Chengcui Zhang, and Kanoksri Sarinnapakorn. I only implement MMM as a computer software and test MMM. The computer software is named AGmagic.
Keywords: color histogram, high-level semantics, image retrieval, image segmentation, low-level features, Markov model mediator.
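The low-level-feature side of the abstract can be sketched with a coarse color histogram and a histogram-intersection similarity. This is only an illustrative sketch of the histogram step with invented pixel data; the MMM model itself additionally weights candidates by the learned access-history semantics:

```python
# Coarse RGB color histogram and histogram-intersection similarity.

def color_histogram(pixels, bins=4):
    """Quantise each RGB channel into `bins` buckets; return a normalised histogram."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins * bins
               + (g * bins // 256) * bins
               + (b * bins // 256))
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint ones."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Two near-red pixels fall into the same bucket of a 4x4x4 histogram.
h = color_histogram([(255, 0, 0), (250, 5, 3)])
```

Coarse quantisation is what makes visually similar colors land in the same bucket, so near-duplicate images score high under the intersection measure.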

Published

An Giang University Journal of Science, Volume 25, pages 45-50. Publication date is January 2006.
ISSN: 0866-8086.
Editors: Tong-Anh Vo, My-Phuong Thanh Ho.
Publisher: An Giang University.

Identifiers


Links

https://goo.gl/rP2tsZ

Citations

Nguyen, L. (2006, January). Image Retrieval by MMM Model on Combination of Image Low-Level Features and High-Level Semantics (Truy tìm ảnh qua mô hình MMM kết hợp giữa đặc trưng cấp thấp và ngữ nghĩa cấp cao của ảnh). (T.-A. Vo, & M.-P. T. Ho, Eds.) An Giang University Journal of Science, 25, 45-50. Retrieved from https://goo.gl/rP2tsZ

Cited


Indexed


Metrics


Categories

Computer Science

Last updated February 2019
Abstracting and indexing
