Liam O'Connors (my partner this year) and I have gotten off to a strong start to this year's team policy debate season.
At the first online tournament, we made it to elimination rounds, placing 21st out of ~80 teams.
I also placed as the second overall team policy speaker.
The debate resolution for this year focuses on U.S. foreign policy toward Central America, addressing
important issues like migration, smuggling, and corruption.
This'll be a fun second year in NCFCA for me, and I'm looking forward to competing more!
I'm excited to announce I've accepted an internship offer as an
Enterprise Technology Software Engineer @ State Farm for the summer of 2025!
A huge thanks to the people who've helped me along the way: to Dr. Veerasamy &
Professor Gupta @ UTDallas for recommendations & guidance in my CS education,
and especially to my parents for the encouragement and advice.
This is the first summer internship I've taken, and I'm thrilled for this opportunity
to start my journey in the industry!
“While at a conference a few weeks back, I spent an interesting evening with a grain of salt.”
This was one of many nonsensical phrases posted by the user “Mark V. Shaney” on Usenet in the 1980s.
Many readers thought Mark was a deranged person, but he was actually a program designed by Rob Pike.
Pike’s program was an early form of AI which “learned” to construct sentences by calculating which
words often went together in text. Decades later, more sophisticated AI programs like ChatGPT have emerged.
With exponentially larger datasets and more powerful computing resources, these programs are far more capable.
Today’s chatbots can easily fool readers with extremely human-like responses. However, AI programs still
make errors reminiscent of Mark V. Shaney. For example, when asked on September 11th about the date of the
Trump v. Harris presidential debate, ChatGPT responded, “The Trump vs. Harris debate you’re referring to
took place on September 17, 2024.” Why do advanced programs like ChatGPT still make mistakes like
this? The reason is that they lack reasoning.
The flaw lies in the “machine learning” approach this type of AI follows. It attempts to build and adjust
mathematical functions to model training data and then apply these functions to the given task.
Conceptually, this is the same approach Mark V. Shaney followed. Mark used a Markov chain
to model which words often went together in sentences and then generated new sentences from that information.
The process involves no reasoning; it is purely statistics and cannot be explained. We humans do not guess
at conclusions by using mathematical functions; we deduce conclusions using logic and reason. When we
reason, we understand why our conclusions are valid. Machine learning doesn’t reason, and this lack of
reason raises serious concerns about the reliability of its output. How can we trust a program if we’re
not able to explain what it produces?
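As a rough sketch of the idea (in Python, with a toy corpus rather than real Usenet text), a word-level Markov chain can be built and sampled like this:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8):
    """Walk the chain, picking each next word at random from observed successors."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Each step only asks "which words followed this one in the training text?" — the output is locally plausible but globally meaningless, which is exactly the Mark V. Shaney effect.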
A plausible solution lies in “explainable AI” (XAI) which, instead of relying on statistics, models
reasoning directly. This summer, I studied at the UT Dallas Applied Logic & Programming Systems (ALPS)
Lab where we explored how the human thought process could be formalized using the logic programming
language Prolog. By representing knowledge as programmatic facts and rules in Prolog, we could model
an explainable decision process. XAI utilizes this approach: instead of mathematical functions,
XAI uses logic to derive conclusions. With this AI, one can actually see the program’s
“thought process”: the facts and rules the program followed to derive its conclusion. XAI has the
potential to be utilized in the construction of much more powerful AI programs.
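To illustrate (a minimal Python sketch rather than actual Prolog, with made-up family facts), a facts-and-rules system can show exactly which facts justified each conclusion it derives:

```python
# Hypothetical facts, loosely mimicking Prolog's fact syntax: parent(tom, bob).
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

# Rule, as it would read in Prolog: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
def derive_grandparents(facts):
    """Derive grandparent facts, recording the facts that justified each one."""
    derived = {}
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == "parent" and p2 == "parent" and y1 == y2:
                derived[("grandparent", x, z)] = [("parent", x, y1), ("parent", y1, z)]
    return derived

for conclusion, justification in derive_grandparents(facts).items():
    print(conclusion, "because", justification)
```

Unlike a statistical model, every output here comes with the chain of facts and the rule that produced it — the "thought process" is inspectable by design.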
XAI is already used in practice together with machine learning. “Reliable chatbots” use LLMs to
extract facts, which are then passed to an XAI system to “understand”. The XAI backend keeps the
chatbot from hallucinating while still answering requests correctly. Professor Gopal Gupta, who leads
the UT Dallas ALPS Lab, presented the “FOLD-SE” algorithm which generates rule sets for XAI to perform
tabular classification tasks, offering an explainable alternative to pure machine learning classification.
Another ongoing project, “Rules as Code” by Jason Morris, aims to use XAI to automate law and legal
services. XAI overcomes the “black-box barrier” of machine learning, opening the door to reliable
automation of decision-making tasks. Prolog predates even the Markov-chain program behind Mark V. Shaney,
and we’ve already seen how far the statistical approach has grown with modern advancement.
If Mark V. Shaney could grow into ChatGPT, what could advancement in XAI lead to?
Sources:
Mark V. Shaney: https://en.wikipedia.org/wiki/Mark_V._Shaney
General XAI: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence
FOLD-SE: https://arxiv.org/abs/2208.07912
Rules As Code: https://law.mit.edu/pub/blawxrulesascodedemonstration/release/1
Just an update post for my upcoming summer plans! With AP testing done and school finishing up, I'll be moving into
a pretty busy break.
I'll be attending the Applied Logic programming intensive in-person at the University of Texas at Dallas next month,
and following that I'll also be partaking in InspiritAI online. I've started back up with my Coursera coursework
and have also started training for the upcoming USACO season.
This year's debate season is over, but the resolutions for next year are already out; we'll be preparing for it
over the summer too. I'm also going to be taking Latin 1B over the summer in preparation for Latin 2 next school year.
I also recently broke 1600 rapid rating on chess.com. We'll be heading to Edmonton in July for the Brick Invitational Hockey Tournament,
where my sister will be playing for the Western Selects.
That's a little bit of what's going on with me for now; I'll probably update this page again later when something interesting happens ;)
Last week I competed in the NCFCA Region 11 Regional Championship tournament, the last of my season.
Again, it was an amazing experience. We debated the best of the best this tournament, and it was really fun overall.
We finished with a record of 3-3 (we never had a losing record this year!), getting close to breaking into Regional elimination rounds.
This is my partner's last year in NCFCA, and I want to thank him for everything; literally couldn't have done it without him!
(William Kuykendall & Alexander Chen @ DBU 2024)
It's been a blessing to have such a fruitful first year, and I can't wait for what's to come!
This past week I've been in Ft. Worth competing in the NCFCA National Mixer hosted there this year.
It was a great experience; my partner and I competed against some of the best teams in our region
while also getting to debate teams from other regions. We advanced to elimination rounds
again but were eliminated in Double-Octofinals.
Even so, it was a great tournament overall - I've grown a lot and learned a ton since my
last tournament. Both my partner and I were awarded 30 speaker points in separate rounds (the maximum
number of points available) for the first time! It was such a great experience to debate against
everyone at Ft. Worth - we really enjoyed it, and we're looking forward to the tournaments in the future!
Kind of note-worthy news! I changed one character in my username everywhere!
From "AlxV05" to "AlxV07", links & profiles which could be updated have been updated. Why?
It goes back to why I have my username in the first place.
I first found out I needed my own username ~5 years ago (10 yrs old) while making
some random account I don't have access to anymore. I came up with
"Alx" quick enough, decided I liked version numbers so added the "V0" soon after,
and finally I just had to choose the last character. 5 sounded cool ("al-ks-vee-oh-five").
Nothing special about 5; just sounded cool.
Well I'm mature now ;) and I'd prefer my handle to have more meaning than just
"sounding cool". So I looked at my username: Alx still fits; three-char compact
name-like tags are always nice to have around. Second part: the version identifier - I
think I'll keep it, I'm getting updated :D So that means I need to choose a number greater
than 5. 7's the only single-digit number greater than 0 that has two syllables (unique).
I'm also Catholic, and the number 7 symbolises some significant topics: perfection, the
7 days of creation, the 7 cardinal virtues, and many more references from the Bible.
After updating everything it feels kinda nice. It's always good to have a switch-up
every now and then (as long as it isn't too drastic nor does it cause any problems).
Well, I guess till next time. Bye!
Hello! This is my first blog post (Yay!) What does one do with a blog?
I'm not entirely sure, but I think I'm supposed to post about things that
have been happening. So here are some normal things I've been up to recently:
- Training on USACO Silver division problems
- Started Andrew Ng's Machine Learning Coursera course w/ DeepLearning.AI & Stanford Online
- Had some fun with robotics friends developing their scouting application
- Programmed a fully functional String Trie for the UCSD String Algorithms course I'm taking
I think another purpose of having a blog is so that one can share whatever one feels like
sharing at a moment's notice with whoever would care to read that one person's blog. I think
every now and then people have that sort of inspiration, but I have yet to experience that.
Ah well, if anything interesting happens I might update this page with a post for that
(I also usually post note-worthy events on my LinkedIn page, so if this page doesn't change for a
while I might have just forgotten to post here as well (but if there isn't anything
new on my LinkedIn then I'm probably just not doing anything note-worthy ;-;)).
But that's it for now, bye!