Conversation article published

I wrote an article for The Conversation about misconceptions around AI, and also tried to incorporate some introductory information about Fuzzy Logic.  The Conversation is an interesting medium to publish through, as it is aimed at a general audience.

Tim Wilkin receives Alfred Deakin medal for best doctoral thesis

Some great news today that our colleague and co-author of some recent papers, Tim, has received the Alfred Deakin medal for his recent thesis on non-monotonic averaging.  It was very easy being an associate supervisor on this thesis and it is a very nice validation of the worth of Tim’s work (and our research on aggregation in general).  Well done, Tim!

Two journal papers accepted

We have been fortunate enough to have the following two papers accepted. The first, accepted to Applied Soft Computing, is a contribution we were invited to work on after presenting some work at FUZZ-IEEE in Beijing (a special session run by Paco Chiclana, Enrique Herrera-Viedma, Jian Wu and Yucheng Dong). I think there are a lot of interesting problems in this area, although we focus on particular aspects rather than the entire process. The second paper was accepted to Information Sciences and is based on collaborative research with the Environmental Ecology group at Deakin University. A nice surprise in this paper was a real dataset genuinely suited to the Bonferroni mean.

Title: Unifying approaches to consensus across different preference representations

Authors: Gleb Beliakov, Simon James

Consensus measures can be useful in group decision making problems, both to guide users toward more reasonable judgements and to give an overall indication of the support for the final decision. The level of consensus between decision makers can be measured in contexts where preferences over alternatives are expressed as evaluations or scores, as pairwise preferences, or as weak orders; however, these different representations often call for different approaches to consensus measurement. In this paper, we look at the distance metrics used to construct consensus measures in each of these settings and at how consistent these are for preference profiles when they are converted from one representation to another. We develop some methods for consistent approaches across decision making settings and provide an example to help investigate differences between some of the commonly used distances.
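As a rough illustration of the score-based setting (a toy construction for this post, not the measure developed in the paper), one common way to define consensus is one minus the average normalised pairwise distance between the decision makers' evaluation vectors:

```r
# Toy consensus measure over score-based preferences: rows of X are
# decision makers, columns are alternatives, all scores in [0, 1].
# Consensus = 1 - (average normalised Manhattan distance over all pairs).
consensus <- function(X) {
  m <- nrow(X)
  n <- ncol(X)
  total <- 0
  for(i in 1:(m - 1)) for(k in (i + 1):m) {
    total <- total + sum(abs(X[i, ] - X[k, ])) / n
  }
  1 - total / choose(m, 2)
}

X <- rbind(c(0.9, 0.4, 0.1),
           c(0.8, 0.5, 0.2),
           c(0.7, 0.6, 0.1))
consensus(X)                       # about 0.89: fairly high agreement
consensus(rbind(X[1, ], X[1, ]))   # 1: identical evaluations
```

Measures like this reach 1 exactly when all decision makers agree, which is the kind of behaviour that needs to be preserved when preference profiles are converted between representations.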

Title: Using aggregation functions to model human judgements of species diversity

Authors: Gleb Beliakov, Simon James and Dale G. Nimmo


In environmental ecology, diversity indices attempt to capture both the number of species in a community and the relative abundance of each.  Many indices have been proposed for quantifying diversity, often based on calculations of dominance, equity and entropy from other research fields.  Here we use linear fitting techniques to investigate the use of aggregation functions, both for evaluating the relative biodiversity of different ecological communities, and for understanding human tendencies when making intuitive diversity comparisons.  The dataset we use was obtained from an online exercise where individuals were asked to compare hypothetical communities in terms of diversity and importance for conservation.
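To make the objects of study concrete, here is a small sketch (not the paper's fitted models) of two classical diversity indices alongside the Bonferroni mean, which for p = q = 1 averages the products of all pairs of distinct abundances:

```r
# Two classical diversity indices applied to a vector of relative
# species abundances (summing to 1), plus the Bonferroni mean.
shannon <- function(p) -sum(p[p > 0] * log(p[p > 0]))
simpson <- function(p) 1 - sum(p^2)

# Bonferroni mean: BM^{p,q}(x) = ((1/(n(n-1))) * sum_{i != j} x_i^p x_j^q)^(1/(p+q))
bonferroni.mean <- function(x, p = 1, q = 1) {
  n <- length(x)
  s <- sum(outer(x^p, x^q)) - sum(x^(p + q))  # sum over all i != j pairs
  (s / (n * (n - 1)))^(1 / (p + q))
}

even   <- rep(0.25, 4)              # four equally abundant species
skewed <- c(0.85, 0.05, 0.05, 0.05) # one dominant species
shannon(even) > shannon(skewed)     # TRUE: the even community is more diverse
```

Note that the Bonferroni mean is idempotent, so a perfectly even community of abundance 0.25 per species returns exactly 0.25.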

Generating all weak orders

In a group decision making paper I am working on, one stage of the experimentation required the generation of weak orders.  I came across a nice way of generating complete orders (solution by tennenrishin), which I had needed to do before; however, partial orderings seemed a bit harder, especially trying to generate all orders for any given set of objects.  My solution ended up building them from binary sets and, although it requires a checking function, I think in the end it's quite succinct.

Generate all weak orders from input n

weak.orders <- function(n) {
	# builds all weak orders (rankings with ties) of n >= 2 objects;
	# each row assigns a level to each object, 0 being the lowest
	all.bins <- list()
	for(j in 2:n) {
		# all 0/1 vectors of length j except all-zeros and all-ones
		bin.list <- NULL
		for(i in 1:(2^j - 2)) {
			bin.list <- rbind(bin.list, as.numeric(intToBits(i))[1:j])
		}
		all.bins[[j]] <- bin.list
	}

	# start from the all-tied order plus every two-level split
	weak.orders <- rbind(array(0, n), all.bins[[n]])

	# repeatedly split the current top level l into levels l and l + 1
	for(l in 1:(n - 1)) {
		nwo <- nrow(weak.orders)
		if(n > 2) for(j in 2:(n - 1)) {
			bin.2add <- all.bins[[j]]
			for(i in 1:nwo) {
				# checking function: does level l hold exactly j objects?
				if(sum(weak.orders[i, weak.orders[i, ] == l]) == l * j) {
					for(k in 1:nrow(bin.2add)) {
						add.this <- weak.orders[i, ]
						# objects marked 1 move up to level l + 1
						add.this[add.this == l] <- l + bin.2add[k, ]
						weak.orders <- rbind(weak.orders, add.this)
					}
				}
			}
		}
	}
	# row counts follow the ordered Bell numbers: 3, 13, 75, ... for n = 2, 3, 4, ...
	weak.orders
}

Appointment as associate editor of IEEE Transactions on Fuzzy Systems

I was quite chuffed earlier in the year to be invited to serve as an AE for IEEE TFS. IEEE Transactions on Fuzzy Systems is one of the most highly regarded journals in our field. It maintains high quality in its articles through rigorous peer-review standards, and I am looking forward to being more involved!

KBS paper and RS Book Chapter update accepted

Received two pieces of good news today: a new paper submitted to Knowledge-Based Systems has been accepted, as has an update of our contribution to the Recommender Systems Handbook.

Title: A penalty-based aggregation operator for non-convex intervals

Authors: G. Beliakov and S. James


In the case of real-valued inputs, averaging aggregation functions have been studied extensively with results arising in fields including probability and statistics, fuzzy decision-making, and various sciences. Although much of the behavior of aggregation functions when combining standard fuzzy membership values is well established, extensions to interval-valued fuzzy sets, hesitant fuzzy sets, and other new domains pose a number of difficulties.
The aggregation of non-convex or discontinuous intervals is usually approached in line with the extension principle, i.e. by aggregating all real-valued input vectors lying within the interval boundaries and taking the union as the final output. Although this is consistent with the aggregation of convex interval inputs, in the non-convex case such operators are not idempotent and may result in outputs which do not faithfully summarize or represent the set of inputs. After giving an overview of the treatment of non-convex intervals and their associated interpretations, we propose a novel extension of the arithmetic mean based on penalty functions that provides a representative output and satisfies idempotency.
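As background for the penalty-based construction (this sketch shows only the standard real-valued case, not the paper's operator for non-convex intervals), the arithmetic mean can be recovered as the value minimising a sum of squared-difference penalties:

```r
# Penalty-based view of averaging on the reals: the arithmetic mean is
# the unique minimiser of P(y) = sum_i (y - x_i)^2.
penalty <- function(y, x) sum((y - x)^2)

x <- c(0.2, 0.5, 0.9)
opt <- optimize(penalty, interval = c(0, 1), x = x, tol = 1e-8)
opt$minimum   # approximately mean(x) = 0.5333...
```

Restricting the candidate outputs y to a given (possibly non-convex) set of values is one way of keeping the output representative of the inputs, which is roughly the direction taken in the paper with idempotency in mind.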


Title: Aggregation functions for recommender systems (in Recommender Systems Handbook, 2nd Ed. by Springer)

Authors: G. Beliakov, T. Calvo and S. James


This chapter gives an overview of aggregation functions and their use in recommender systems.  The classical weighted average lies at the heart of various recommendation mechanisms, often being employed to combine item feature scores or predict ratings from similar users.  Some improvements to accuracy and robustness can be achieved by aggregating different measures of similarity or using an average of recommendations obtained through different techniques.  Advances made in the theory of aggregation functions therefore have the potential to deliver increased performance to many recommender systems.    We provide definitions of some important families and properties, sophisticated methods of construction, and various examples of aggregation functions in the domain of recommender systems.
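As a minimal example of the mechanism described above (the item features and weights here are invented for illustration), a content-based recommender might combine feature scores using R's built-in weighted mean:

```r
# Combining item feature scores with a weighted arithmetic mean.
# Feature names and weights are purely illustrative.
scores <- c(genre = 0.8, director = 0.6, popularity = 0.9)
w <- c(0.5, 0.3, 0.2)      # importance weights summing to 1
weighted.mean(scores, w)   # 0.5*0.8 + 0.3*0.6 + 0.2*0.9 = 0.76
```

Much of the theory of aggregation functions concerns replacing this weighted average with alternatives (ordered weighted averages, fuzzy-integral-based means, and so on) that can model interaction between inputs.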


Consensus and Evenness Measures Summer Project

Over the Summer, Laura undertook a project with the IPCI lab on Measures of Consensus and Ecological Evenness.  She presented her results at a seminar earlier this year and her report led to the MDAI contribution.  We look forward to Laura continuing her research in this area.