Traditional peer review is a process whereby submissions from various scientists are selected according to criteria passed on to reviewers by conference organizers or journal editors. This process is used to maintain the quality of the work being presented and to help group reports relevant to a given community (or topic). However, certain scientific opinions and theories compete and have partisans. Common examples of such competition arise when deciding on the most important metric for classification algorithms, what to use as the basis for recommendation algorithms, or the best predictive model for a known phenomenon, to name a few. The common assumption is that the community will be equally informed about the arguments of all the studies involved, so that it can reach objective conclusions. This assumption is reasonable when partisans of each competing opinion can eventually review, and recommend for publication, the studies that agree with their perspective. This, in turn, can be expected to happen whenever expert reviewers are assigned to papers at random. However, in recent years we have seen that many social relationships follow power-law distributions rather than random ones. In this study we investigate whether the same holds in the world of peer review, specifically in the network of reviewing relations of an open-review journal. We found that a power-law distribution is indeed present: a small group of reviewers evaluates a significant fraction of all submissions. This is undesirable, since these "hubs" have an unmatched influence on what gets published. This experiment presents a first case in which the power-law structure of a social network can arguably be considered a negative factor overall. It also supports the argument for using the social graph of reviewers as an additional metric of the quality of a journal or conference.
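To make the "hub reviewer" effect concrete, the following is a minimal, purely illustrative sketch, not the paper's data or method: a preferential-attachment toy model in which each new submission tends to go to reviewers who have already reviewed a lot. All parameter values (numbers of submissions, reviewers, and reviews per paper) are arbitrary assumptions chosen only for demonstration.

```python
import random
from collections import Counter

def simulate_reviews(n_submissions=5000, n_reviewers=200,
                     reviews_per_paper=3, seed=42):
    """Toy preferential-attachment model of reviewer assignment:
    each paper draws distinct reviewers with probability proportional
    to (past review count + 1), a rich-get-richer process."""
    rng = random.Random(seed)
    counts = Counter({r: 0 for r in range(n_reviewers)})
    for _ in range(n_submissions):
        weights = [counts[r] + 1 for r in range(n_reviewers)]
        chosen = set()
        while len(chosen) < reviews_per_paper:
            chosen.add(rng.choices(range(n_reviewers), weights=weights)[0])
        for r in chosen:
            counts[r] += 1
    return counts

counts = simulate_reviews()
total = sum(counts.values())
# Share of all reviews handled by the 10 busiest reviewers (5% of the pool).
share = sum(c for _, c in counts.most_common(10)) / total
```

Under uniform random assignment, 10 of 200 reviewers would handle about 5% of the reviews; in this skewed model they handle several times that, which is the kind of concentration the abstract describes as undesirable.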