Last Update: 4.6.2024
Effloresce Cerebrum

Think of Effloresce Cerebrum as the API bridge, written in C++, between Effloresce Cordis and Effloresce Stomachum. Its three major roles are to collect and process data for a first render for Effloresce Oculos, Morus, and Nuntium; to collect and organize the data on Effloresce Stomachum; and to provide a render package to Cordis.

We use Caffe as the main deep learning framework. Caffe was developed by the Berkeley Vision and Learning Center (BVLC); it is written in C++ and designed for speed, modularity, and scalability. It provides a flexible architecture for designing and training deep neural networks, with support for convolutional neural networks (CNNs), recurrent neural networks (RNNs), and more. Caffe supports data augmentation techniques that increase the diversity and size of the training dataset, which helps improve the generalization and robustness of the trained model. Data augmentation can be applied directly within Caffe using data augmentation layers, or as part of the preprocessing pipeline before feeding the data into the network.
Work Flow: Pillar Data - Machine Learning

Pillars are the foundation of how we store data and bring it into Cerebrum for the render files, and the process starts with data collection. Effloresce Cerebrum is always learning from its own prompts through a program called Brainstorm, which creates prompts for Cerebrum using internet data collection and a robot collector; it can also be prompted manually. (If Oculos receives a GUI prompt, it is sent to Cerebrum with top priority and Brainstorm is run, first checking whether pillars have already been created for that prompt.) An Oculos prompt will use the Dlib frameworks for feature extraction, object tracking, facial expression, shape prediction, image segmentation, object detection, facial landmark detection, face detection, and lesser Dlib vision frameworks. Cerebrum pulls the pillars for the prompt and matches them to datasets in SQL to create a render file, which is sent to Cordis AI for a second render.

Here is a very simplified version of 7 pillars (when Cerebrum pulled this in March 2024, it used 199 different pillars) for the prompt: "closeup portrait of a pretty 21 year old Caucasian white girl with freckles, long black hair"

Pillar 1: Type: Closeup Portrait
Pillar 2: Pretty
Pillar 3: Age Found: 21
Pillar 4: Caucasian
Pillar 5: White Girl
Pillar 6: Freckles
Pillar 7: Long Black Hair

Pillar 1 gives Cerebrum 1.9 million samples of what "closeup portrait" means, along with datasets recording how many times each sample has been used and whether this pillar has previously been used with the other pillars pulled for this render. Once the pillars are compiled, Cordis computes and Oculos renders. If a change happens in the GUI prompt, the changes are sent back to Cerebrum, re-compiled, and recorded for future reference, and Cerebrum stores user data and statistics to support each user's preferences.
Pillar Framework Examples:

We have found that "age" is tricky for Cerebrum, and we are working on creating a much more defined idea of output based on age. This is a sample of the alpha version of age pillars for ages 13 to 60; the 20s to 30s range is hard when combined with other definitions like "pretty" because of database bias.
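One way to make an age value less ambiguous before it meets biased definitions like "pretty" is to bucket it into coarse ranges first. The sketch below is a hypothetical illustration of that idea; the boundaries and labels are placeholders of ours, not the actual alpha age-pillar definitions.

```cpp
#include <string>

// Map a numeric age to a coarse age-pillar bucket. Boundaries and
// labels here are illustrative placeholders, not the project's
// actual 13-60 alpha pillar set.
std::string age_bucket(int age) {
    if (age < 13 || age > 60) return "out_of_range";
    if (age < 20) return "teen";
    if (age < 30) return "twenties";  // the hard range noted above
    if (age < 40) return "thirties";  // also affected by database bias
    return "forties_to_sixty";
}
```

A bucket label like "twenties" could then be stored and matched as its own pillar, so co-occurring pillars are counted against the range rather than against every individual age.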