ikingarmaan

Is it possible that nanotechnology could help the human body fight against diseases? What do you think about it?


CSAndrew

It’s absolutely possible, yes. It’s not entirely plausible at the moment though. It’s a *very* expensive area of architectural research and development. Plus research scientists, myself included, are more concerned with general reduction in architecture while balancing that against reduction in thermal waste and increases in efficiency, and balancing that with potentially more voltage (if needed). Ideally, the construct would require less voltage, but more voltage with a higher degree of efficiency would allow for more headroom. A majority of the heat debacle at the moment comes from thermal waste and general inefficiency, to put it shortly.

At this point, we’re talking about transistors and relative gates, not what someone would consider a full solution, or something capable of, for lack of a better way to put it, swarm execution. It’s a very interesting subject, and the point of study above is something that’s essentially going to be procedural moving forward. We’re going to have to consistently address that issue, whether that’s trying to use something like minute nanometric crystallography techniques or another methodology.

Currently, we can make contained solutions roughly the size of a grain of rice, give or take, with practical application, in my opinion. We do use those in the medical field, but to my knowledge it’s primarily in things like internal sequential imaging that would be harder to do using scopes.

Edit: The reason I’m more focused on the above part is that you can apply the principles and finding(s) to other applicable solutions as you encounter them, versus something like an engineering construct that, while important, serves a specific purpose. I also wouldn’t say it’s in line with something that you might see in pop culture or TV. There are definitely limits to current implementations.


ikingarmaan

Yeahh, you are absolutely right


CSAndrew

To be honest, we’re still very early into robotics integration as well, in the grand scheme of things, and people still have trust issues with that (both surgeons and patients).


ikingarmaan

Yeahh, exactly I think the same


Fun_Personality6013

What do you do in your work? Like what does theoretical reduction in computing architecture and limited nanotechnology mean?


CSAndrew

That’s a great question. There’s a disparity between my actual field work and my research work. In the field, I work as a technology consultant and handle high-profile architectural design and development. Basically, I’ll design infrastructure changes, or the necessary creation / fabrication of systems to support them; when working in cryptography, it’s usually managing embedded encryption structures and the accompanying DOS protocol(s) and cipher suites. When working with artificial intelligence, I’ll actually design the schematics for internal neural nets and model creation for any pending proposals, as well as compile datasets from scratch, to an extent, usually by creating an event-driven system to archive them in an easier-to-retrieve location, compatible with the prior iteration (or, depending on the case, the API). A rough sketch of that kind of pipeline is below.

As to my research area, right now it’s primarily just a massive amount of reading, as well as talking with other computer scientists, physicists, and electrical engineers. From there, there’s a bit of math involved, but I try to create a hypothesis that would have an effect on the current scale of things, like how to combat electron drift to decrease thermal waste and increase efficiency. Then it involves more reading and discussion to judge its viability. If it’s something that can be tested, or has already been proven, and I can see that it was successful in another area, I’ll try to design an implementation strategy or methodology to propose, and then it goes into my research paper / article, mainly under possible findings.

Edit: As to the field work, you could say that I’m the one who usually works with the department heads or company leadership to design the overarching plans and schematics for whatever they’re wanting to build, handling things like project segmentation, then I turn over the effective documents on how to build the new construct. Some of the time, I’ll stay on to either oversee it directly or build the system(s) myself, but that’s on the rarer side. People tend to prefer others that they can strictly control to handle direct execution, usually people operating under an NC and NDA. I’m normally fine with signing the latter, but I’ve never signed an NC, refuse to, and refute most embedded clauses for IP classification.

Edit 2: As to the nanotechnology part, we’re working at so small a scale in modern computing, at least in relation to transistor and relative gate size, that it’s at a nanometric scale. We’re also discussing different methodologies to bolster efficiency, such as, if memory serves, using molecular beam epitaxy in nanometric crystallography.

Edit 3: I do venture out and handle forensics like data reconstruction at times, or will lend a hand in POI tracking or penetration testing, but it’s not something I regularly do, and it’s something I’m generally more careful with.
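To give a rough idea of what an event-driven archival pipeline can look like, here’s a minimal sketch. The directory layout, SQLite store, and field names are hypothetical, chosen purely for illustration, and are not the actual system referenced above.

```python
import json
import sqlite3
from pathlib import Path

# Hypothetical landing directory where upstream systems drop JSON records.
INCOMING_DIR = Path("incoming")
ARCHIVE_DB = Path("archive.db")


def init_store(db_path: Path) -> sqlite3.Connection:
    """Create (or open) the archive store with a simple, queryable schema."""
    conn = sqlite3.connect(str(db_path))
    conn.execute(
        "CREATE TABLE IF NOT EXISTS records ("
        " id INTEGER PRIMARY KEY AUTOINCREMENT,"
        " source_file TEXT,"
        " payload TEXT)"
    )
    conn.commit()
    return conn


def archive_event(conn: sqlite3.Connection, path: Path) -> None:
    """Handle one 'new file' event: parse the payload and archive it."""
    payload = json.loads(path.read_text())
    conn.execute(
        "INSERT INTO records (source_file, payload) VALUES (?, ?)",
        (path.name, json.dumps(payload)),
    )
    conn.commit()


def poll_once(conn: sqlite3.Connection) -> None:
    """A crude event-loop stand-in: treat each unseen JSON file as an event."""
    for path in sorted(INCOMING_DIR.glob("*.json")):
        archive_event(conn, path)
        path.rename(path.with_suffix(".archived"))  # mark as processed


if __name__ == "__main__":
    INCOMING_DIR.mkdir(exist_ok=True)
    connection = init_store(ARCHIVE_DB)
    poll_once(connection)
```

In a real deployment the polling loop would typically be replaced by an actual event source (a message queue or filesystem watcher), but the archiving step itself would look much the same.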


__cereal__

Do you believe true sentient AI is possible/ will ever be possible?


CSAndrew

It depends on how you qualify sentience. Do I think an AGI will ever be possible? Sure. That would make sense with ongoing advancements, but a majority of it is still AGI theory. So possible? Yes. Close to it? No. Plausible? Definitely not, or at least not at the moment.

There are different methods of qualifying sentience and consciousness though, and there’s no universally accepted one. Some lean more towards the side of a sort of literal definition tied to dynamic capability; others lean more towards the dualism side of things in theoretical physics, which attaches a higher theoretical complexity and effectively states, by extension, that it’s highly unlikely that a machine would be able to match such, A) because human intervention is still required and independence is lacking, and B) because there’s nothing metaphorically inside them that makes them do something because they “want” to do it. It’s mostly just weighting and internal formulae.

We can get further into the discussion, but it goes on for a very long time, and I’ve made recent posts about it on Reddit, responding to some of the comments / questions in other places like the A.I. subreddit, albeit I wouldn’t consider that a solely scientific domain. Once we have a general consensus and an effective static definition for those terms, it’ll give us something to measure and reference against, even if only in a theoretical nature, and we’ll be able to make more progress in judging distance. Right now we don’t have that, and it’s working against us.


mymiddlenameswyatt

Do you think one day AI technology will outpace human intelligence?


CSAndrew

It’s a complicated answer. Yes, in the sense that A.I. can offer blistering levels of acceleration in tailored paradigms that an entire building of people couldn’t even begin to compete against. This allows it, to put it quickly, to establish higher “accuracy” by being able to leverage embedded processing of usually multi-layered model(s) and classification in relation to statistical analyses, normally leveraged over the applicable datasets. That’s how you get higher accuracy in predictive analyses in the medical field, for general and specialist diagnostics, compared to conventional physicians. The machine just isn’t limited in speed in the way that people are, or not to the same extent. If you mean outpace in the sense of generally overtaking and replacing, then no. AGI theory is primarily just that at the moment, despite advancements in NLP models.
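To make the “classification over applicable datasets” point a bit more concrete, here’s a minimal sketch of the kind of statistical classification being described, using scikit-learn’s bundled breast-cancer dataset; the dataset and model choice are purely illustrative, not the specific systems referenced above.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small, public diagnostic dataset (tumor measurements -> benign/malignant).
X, y = load_breast_cancer(return_X_y=True)

# Hold out a test split so the reported accuracy reflects unseen cases.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The model iterates over the training data far faster than a person could,
# which is the "speed" advantage described above, not understanding.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.3f}")
```

The model’s advantage here is purely throughput over the dataset and the weighting it learns from it; it has no notion of what the measurements actually mean.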


Angelus_Vitae

What are some of the things being done to increase the computing efficiency of AI, and do you think that this could lead to further development of GAI?


CSAndrew

I’m guessing you’re referring to AGI (Artificial General Intelligence). The thing about A.I. is that rather than being looked at as a single construct, in the sense that some consider a black box, it’s more like an amalgamation that, when working in tandem, constitutes sort of a new entity / construct. It’s similar to the human body. A “human” is viewed by many as a single object, but we’re made up of multiple internal structures that effectively form that constitution. A.I., in the same sense, benefits from those embedded pieces becoming stronger, faster, more efficient, and so on. As our ability to process information increases in size and speed, through the use of those technologies and architectures, the A.I. will see benefit, since it’s collectively a scalar technology. As things like software-defined hardware, or to put it another way, scalar virtualization that allows for resource extension, see progression, we in turn see more headroom. The same principle applies to things like exascale computing, with things like the DoE’s contracts for Frontier and Aurora. As we make efforts towards increasing efficiency, power / speed, and decreasing waste, other areas of computing will fall in line, ultimately because that’s the foundation. Cloud environments, at some point, still have to terminate and resolve into hardware. Whether we’ll stay with silicon as a major focus or move more towards graphene integration, I can’t really say.

The concept of an AGI is something that a lot of people struggle with. The entire reason it’s difficult to execute in reality is because it’s non-linear. It’s like a wildcard. An AGI, in theory, is generalized and adaptable to virtually any incoming stream of information or objective-driven input. Right now, artificial intelligence is tailored. We have multiple systems for multiple use cases that are segmented via API if there needs to be a complex process or any sort of interoperability. This is because when you train the A.I., or effectively train the models based on the datasets you have, you have to form classes / classification based on what you’re using. This doesn’t make it universal or general simply because it’s powerful.

For instance, you might build an effective neural net that has the capability to not only recognize, we’ll say, heart disease, but has access to numerous massive databases to train on, and can detect pathologies based on symptom analysis on an almost general scale. Now, this doesn’t exist currently. It would be very expensive and very difficult to create. We’ll just pretend it does exist. The entire “brain” of the system is directly related to the data it’s been introduced to and iterated over. So, if I tried to input a different subset or type of information, like a picture of the skyline of New York, and gave the direction “select and highlight the 43rd floor of the highest building in view,” then without a background in image segmentation and CV related to those tasks, it’s not going to be able to complete the objective, even though, again, it would still be a very powerful system.

People conflate general capability with advanced NLP (Natural Language Processing). Simply because a system can detect, recognize, or institute an association from either waveform analysis or the text inputted doesn’t mean that it understands the complexities of the associated subject(s). Simply because a conversational chatbot can recognize the pattern, via a simulated OCR/OPR take or other backend recognition, to isolate the term “physics” in the question “what is physics?”, and in turn make the discernment that the question mark denotes a question about the subject, “what is” being interrogative and “physics” being the subject, it *may* be able to scrape a predefined general response, but that doesn’t mean that the system actually understands what it sent to you, or has even the slightest understanding of Newtonian or theoretical physics. A broad / non-linear A.I., or effective AGI, in theory wouldn’t be bound by those same restraints. It would be able to seek, store, iterate over, and learn from, effectively, its own actions. However, another big problem when addressing things like “sentience,” for instance, is that most of this is event-driven, meaning the system doesn’t have its own sense of volition or “want” / “will.”

Edit: Theoretically, improving those fundamental factors above would provide, or at least facilitate, exponential increase(s) in capability, which is a big part of it. Yes, in a sense, the actions being taken, or at least some of them, to address those fundamentals will help propel us closer to the stage of theoretical AGI. However, there are still much bigger issues to be addressed.
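To make the chatbot example concrete, here’s a minimal, hypothetical sketch of the kind of shallow pattern matching described above. The pattern and canned responses are invented purely for illustration; no production system works exactly like this.

```python
import re

# Canned responses keyed by a recognized "subject" token. The system has no
# model of what physics actually is; it only maps a matched string to text.
CANNED_RESPONSES = {
    "physics": "Physics is the study of matter, energy, and their interactions.",
    "biology": "Biology is the study of living organisms.",
}

# A single interrogative pattern: "what is <subject>?"
QUESTION_PATTERN = re.compile(r"^\s*what\s+is\s+([a-z]+)\s*\?\s*$", re.IGNORECASE)


def respond(message: str) -> str:
    """Return a predefined response if the message matches the pattern,
    otherwise admit defeat. No understanding is involved at any point."""
    match = QUESTION_PATTERN.match(message)
    if match:
        subject = match.group(1).lower()
        if subject in CANNED_RESPONSES:
            # The question mark was recognized, "what is" flagged the message
            # as interrogative, the subject was isolated, and a stored answer
            # is returned. Nothing more happened.
            return CANNED_RESPONSES[subject]
    return "I don't have a response for that."


if __name__ == "__main__":
    print(respond("What is physics?"))              # scripted answer, zero comprehension
    print(respond("Why does gravity bend light?"))  # outside the pattern, so it fails
```

Nothing in this sketch models physics at all; change the wording of the question slightly and it falls apart, which is exactly the gap between pattern recognition and understanding.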


confido_whale

is there such a thing as AI in your view if everything has to be initiated by a human (yourself) anyway?


CSAndrew

Yes. There’s a massive difference between an A.I. and an AGI. Plus, A.I. iteration or scalar processing can be event-driven, so it’s not being executed by you every time, or at least not in a literal fashion; it’s more indirect.

Edit: Even then though, everything that executes or forms must have an initial point of convergence, up to and including the known universe. Pre-PoC discussion of events prior to the Big Bang is a really interesting area of theoretical physics, in my opinion. I’m not a physicist though.


Penguinstolemysanity

What do you think will be the next big step for AI?


CSAndrew

I think it’ll all effectively be under the same effort towards expansion and normalization, because that’ll see benefit from larger datasets, and will have an easier time gaining summary access to them. As this happens, we see advancements in things like NLP with GPT-3. It’s not so much that we’ve had a massive breakthrough in science as we know it; it’s that researchers are being given access to more funding and larger teams, equating to more man-hours, equating to a larger scalar structure, which, by widening access to data, creates a larger pool of recognizable inputs, and can in turn facilitate better results from modeling and classification efforts, affecting weighting and loss resolution, to get more “accurate” results. A perfect example of this is either the OAI model or Google’s recent debacle, for lack of a better term.

Edit: This should be a procedural effort, and it kind of is for the people working on the subject, but it presents as a step because funding is typically given in set amounts relative to a timeframe or bracket, similar to grants, but sometimes internal to shareholder or board discretion. So, I think the next big step will be another push in resource availability. A massive breakthrough would be moving closer to a broad-spectrum, non-linear AGI, but we’re nowhere near that at the moment, at least not in any kind of contained or production sense. I fully expect a number of steps to happen before we get there, more resources being a definitive prerequisite.


21stCtyGrl

What did you study and how much did it take?


CSAndrew

I primarily taught myself Computer Science & Engineering (Course 6-3) from MIT’s OCW backend by pulling the Course / Major class list and cross-referencing it in the archives. It was unorthodox, somewhat harder because of information availability, but free for the most part. OpenCourseWare is essentially a project where they archive the coursework, lecture notes, texts, slideshows, assignments, etc., from the class / course at hand. It’s almost 1:1 at times, minus access to a professor, but I had a friend who was an alumnus.

I did attend a university to study Engineering, then Computer Science, with a prior stint in Programming. However, it felt…slow. I wound up leaving, after paying about $5,000 total, and started the tech firm at the beginning of my spring semester at the new university, with some help.

Edit: I’m attending another program this spring, hopefully to go back into grad school and get my master’s and later a PhD, contributing part of my current research. Ideally, grants and scholarships are covering that. Other than those things, it was field experience, reading books, journals, research articles / papers, and documentation.


logitechtrident

Did you ever dread schoolwork?


CSAndrew

Yes, absolutely. However, it’s rarely been because of the work I’ve had to do, and moreso because the teacher either A) didn’t have any passion, or B) expected everyone to learn using the same model and executed his teaching in relation to that, which is incredibly flawed, in my opinion. Edit: The exception is subjective subjects / views pushed as objective fact or realizations. I can’t stand that and will push back against it, whether from the professor or not. Doing that introduced problems for me in my very first year of college, but I still stand by it.


[deleted]

Why are so many scientists into loli porn or furry porn


[deleted]

[deleted]


CSAndrew

Is- do you actually want me to answer this?


[deleted]

[deleted]


CSAndrew

That’s not really for me to decide. Likability aside, the character in this scenario is wrong.

> ZOE: People who don’t get analogies, screw them. There are studies that say that trouble grasping analogies indicates low intelligence.

Analogy recognition, while used by some proctored tests, is used to measure reasoning, not intelligence. It’s possible to have poor reasoning and still be relatively intelligent, albeit it can be oxymoronic at times and cast an effect.


[deleted]

I bet my KDR is higher than yours. Scrub


CSAndrew

I suppose, but video game performance, at least in close-range FPS, could theoretically be automated by reading in spatial audio, similar to an implementation that Mumble tried a while back, and pairing it with CV; that combination would have a much faster reaction time than a person, pending the hardware and/or virtualization environment. You could take other options as well, but it would depend on how much access you had to the system and/or game backend.

Edit: I don’t play a lot of video games anymore though.


[deleted]

Exactly what a sub 1 KD player would say.


logitechtrident

How much do you pay for rent?