Philosophical Schools?
07-28-2013, 07:27 AM
Post: #1
Philosophical Schools?
I used to be heavily into Hellenic religion, specifically Orphism. The last few months I have been juggling back and forth because I lost faith that there could be a god, and I have now determined that logically I cannot believe in one, and even if I could, I could never know it.

But, to the point: I am now an atheist. But I still love philosophy, and I am looking for some kind of objective, non-relativistic (redundant, I know) moral/ethical code. My problem is that the philosophies I have always subscribed to, namely the classical Greco-Roman philosophies, almost always mention god or gods. I am still looking back through them to see if one does not. But my question to other atheists: could I be presented with some secular philosophies to look through? I want something that I can subscribe to that is not dependent on religion.

P.S.: Sorry if a thread like this already exists, and sorry for all the typos. My keyboard has been sticking lately, and as much as I try to fix them I always miss some.

Ištu dumqim amqut, u anaku anmiq
07-28-2013, 08:53 AM (This post was last modified: 07-28-2013 08:58 AM by legend.)
Post: #2
RE: Philosophical Schools?
You might be interested in reading Sam Harris's book The Moral Landscape. Or you can start with this video to see if you like the general direction he goes in.

[embedded video]

Sam's position is fundamentally consequentialist, and, in general, I would suggest reading more on consequentialist systems such as preference utilitarianism. These systems typically rely on a more objective system of reasoning, and consequentialism is the paradigm that I adopt.

I also gave you my two axioms for ethics in another thread, which, again, are:

1) If an agent's behavior is ethical, then it is rationally justifiable for the agent to behave that way.

2) If an agent's behavior is ethical, then its rational justification cannot be contingent on an appeal to malicious values.


I also rigorously ground these axioms in stochastic games formalisms (formalisms from game theory), but the above is the simpler plain-English expression of them.


I'll add that I've recently written (elsewhere on the internet) quite a bit on my consequentialist ethical framework, and I can provide some of that here if you'd like to see it. It's fairly dense in formalism, though, so depending on how deep you want to explore it, it might be worthwhile to have a more casual discussion about the English version of the axioms first to see if you're interested.
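
In the meantime, if you're curious what a stochastic game even is, here is a minimal sketch of the standard two-player structure. This is the generic textbook definition, not my specific VF model, and all the names here are just for illustration:

```python
# A bare-bones two-player stochastic game: the textbook formalism,
# not the specific model the axioms are grounded in.
from dataclasses import dataclass
from typing import Callable, Dict, Set, Tuple

State = str
Action = str
JointAction = Tuple[Action, Action]  # one action per agent

@dataclass
class StochasticGame:
    states: Set[State]
    actions: Set[Action]
    # transition(state, joint_action) -> distribution over next states
    transition: Callable[[State, JointAction], Dict[State, float]]
    # reward(state, joint_action) -> one payoff per agent; each agent's
    # reward function is where that agent's values enter the formalism
    reward: Callable[[State, JointAction], Tuple[float, float]]
    discount: float  # how heavily agents weigh future payoffs
```

The relevant point is that each agent's values enter through its reward function, and rationally justifiable behavior is behavior that actually optimizes against those values.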
07-28-2013, 09:02 AM
Post: #3
RE: Philosophical Schools?
(07-28-2013 08:53 AM)legend Wrote:  1) If an agent's behavior is ethical, then it is rationally justifiable for the agent to behave that way.

2) If an agent's behavior is ethical, then its rational justification cannot be contingent on an appeal to malicious values.


I also rigorously ground these axioms in stochastic games formalisms (formalisms from game theory), but the above is the simpler plain-English expression of them.


I'll add that I've recently written (elsewhere on the internet) quite a bit on my consequentialist ethical framework, and I can provide some of that here if you'd like to see it. It's fairly dense in formalism, though, so depending on how deep you want to explore it, it might be worthwhile to have a more casual discussion about the English version of the axioms first to see if you're interested.
I think it would do well to discuss it first. I tend to go rather deep when I study a philosophy, but if I have no interest in a particular one I do not. So, I suppose I should give what I take these to mean.

If you commit an action, you should first think on that action, considering whether or not it, in both the short and the long run, will be beneficial to yourself and those around you, because if not, the action is in no way rational.

And second, the act cannot have as its end the harming of another. Even if it would benefit you, you should not have the goal of harming, directly or indirectly, a third party.

Am I on track so far? Or have I missed or altered the point?

Ištu dumqim amqut, u anaku anmiq
07-28-2013, 09:27 AM (This post was last modified: 07-28-2013 09:31 AM by legend.)
Post: #4
RE: Philosophical Schools?
(07-28-2013 09:02 AM)Achrelos Wrote:  I think it would do well to discuss it first. I tend to go rather deep when I study a philosophy, but if I have no interest in a particular one I do not. So, I suppose I should give what I take these to mean.

If you commit an action, you should first think on that action, considering whether or not it, in both the short and the long run, will be beneficial to yourself and those around you, because if not, the action is in no way rational.

And second, the act cannot have as its end the harming of another. Even if it would benefit you, you should not have the goal of harming, directly or indirectly, a third party.

Am I on track so far? Or have I missed or altered the point?


At a minimum, very close, but maybe not quite; I'll make some clarifications, though it's possible you have understood correctly already. For the first, I would also state that even a person who has tried to reason out a solution may not arrive at the ethically optimal one, due to faults in their reasoning or in the assumptions and approximations they had to make. However, a well-intentioned person would certainly be open to refining their course of action if someone could present the objective reason why their previous conclusion was incorrect.


For the second, you may also have this right, but it's important to note that the axiom does not prohibit harming another, even indirectly. In fact, the most ethical course of action you can take in a situation may require harming another. What it does state is that the harm of the other cannot be the goal and end in itself. In the previous thread you mentioned defending yourself, and that is absolutely permitted ethically under axiom 2, because the goal is not to harm the other but to protect yourself.
07-28-2013, 10:08 AM (This post was last modified: 07-28-2013 10:09 AM by Achrelos.)
Post: #5
RE: Philosophical Schools?
1.) Point taken; certainly people's reasoning is never perfect. But then, I suppose that leads into the further discussion of what good reasoning is and is not, and of how one should reason about/rationalize things.

2.) When I said indirectly, I meant maybe not causing bodily harm but still acting maliciously. For instance, theft causes no direct harm to a person, but indirectly it may cause issues in the near or far future that lower that person's conditions. I do understand, though, that harm is not always malicious, so long as it is both minimized (I think you said that somewhere?) and not the purpose of the action.

Are there any additional points you think I may have missed, or that are simply noteworthy to ensure understanding?

Ištu dumqim amqut, u anaku anmiq
07-28-2013, 10:29 AM
Post: #6
RE: Philosophical Schools?
(07-28-2013 10:08 AM)Achrelos Wrote:  1.) Point taken; certainly people's reasoning is never perfect. But then, I suppose that leads into the further discussion of what good reasoning is and is not, and of how one should reason about/rationalize things.

2.) When I said indirectly, I meant maybe not causing bodily harm but still acting maliciously. For instance, theft causes no direct harm to a person, but indirectly it may cause issues in the near or far future that lower that person's conditions. I do understand, though, that harm is not always malicious, so long as it is both minimized (I think you said that somewhere?) and not the purpose of the action.

Are there any additional points you think I may have missed, or that are simply noteworthy to ensure understanding?


Okay, cool. I think you understand the intent of the axioms at this point, well enough to proceed with any questions or challenges you have with them, at least. Or, if you agree that they are acceptable axioms, we could also discuss what they mean in more detail, if you're interested.


Regarding how the optimal reasoning should be performed: that is why I ground things in stochastic games formalisms, because they give a very rigorous basis for analysis. I should note that in our complex world I doubt we'll be able to perform absolutely perfect reasoning in these systems unless the situation is trivially local with a lot of background knowledge, but the formalism gives a basis for how to reason, and it reveals the assumptions and approximations a person is making, thereby opening the door for refinement.
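
To make that slightly more concrete: in the degenerate single-agent case, a stochastic game reduces to a Markov decision process, where "ideal reasoning" has a precise computational meaning. A minimal sketch, using the generic value iteration routine from the textbooks (this assumes small, table-based transition and reward models; it is not the VF machinery itself):

```python
# Generic value iteration for a small, table-based Markov decision
# process (the single-agent special case of a stochastic game).
def value_iteration(states, actions, transition, reward,
                    discount=0.95, tol=1e-6):
    """transition[s][a] is a dict {next_state: probability};
    reward[s][a] is the immediate payoff for taking a in s."""
    values = {s: 0.0 for s in states}
    while True:
        # For each state, back up the best one-step choice.
        new_values = {
            s: max(
                reward[s][a] + discount * sum(
                    p * values[s2]
                    for s2, p in transition[s][a].items())
                for a in actions)
            for s in states
        }
        # Stop once the value estimates have converged.
        if max(abs(new_values[s] - values[s]) for s in states) < tol:
            return new_values
        values = new_values
```

The multi-agent case is much harder, because each agent's best choice depends on what the others do; that is exactly where equilibrium concepts enter, and part of why perfect reasoning is usually out of reach outside of trivially local situations.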
07-28-2013, 11:38 AM (This post was last modified: 07-28-2013 11:44 AM by legend.)
Post: #7
RE: Philosophical Schools?
EDIT: Am I insane here, or was there another post here just a moment ago that said you did want to discuss...

I have a response if so, but I didn't want to ramble on if you retracted that comment, or if I was reading something else and confused it with this thread :P

EDIT 2: I just scanned my cache, and there was indeed another post here that said you wanted to discuss it further. If you retracted that desire, let me know. If something weird happened to the post and you still do, then let me know that too, and I'll post my elaboration.
07-28-2013, 01:10 PM
Post: #8
RE: Philosophical Schools?
You are not crazy. I am sort of on a portable device right now and short on time. I began typing a response and accidentally hit the post reply button. I deleted it because I had posted it incomplete and didn't have time to answer fully. I will have time to respond fully soon, but I wanted to assure you that you are not crazy. :D

I am interested, though, and if you had a direction to go in, then by all means continue.

Ištu dumqim amqut, u anaku anmiq
07-28-2013, 01:23 PM
Post: #9
RE: Philosophical Schools?
(07-28-2013 01:10 PM)Achrelos Wrote:  You are not crazy. I am sort of on a portable device right now and short on time. I began typing a response and accidentally hit the post reply button. I deleted it because I had posted it incomplete and didn't have time to answer fully. I will have time to respond fully soon, but I wanted to assure you that you are not crazy. :D

I am interested, though, and if you had a direction to go in, then by all means continue.


Haha, okay. That's understandable.

Below is the response I was going to make, but if you'd like to address something other than what I address below, don't feel bad about redirecting it :)

meBeforeEdit Wrote: For brevity of speech, let me first start by saying that I call my system of ethics based on those two axioms Value Functionalism (VF).

One thing that I think should probably be highlighted about VF, and that differs from many other consequentialist ethical systems, is that it does not actually specify what amount of concern a person must have for others to be ethical. In particular, this is a major difference between VF and utilitarian ethics. That is, utilitarian ethics is typically predicated on the notion that, however you evaluate the utility of individuals, what is ethical is a maximization that shows equal concern for everyone. VF, in contrast, permits self-interested agents, agents with some degree of compassion for others (but perhaps not equal to their concern for themselves), agents that do have equal concern for everyone, and agents that even have more concern for others than for themselves. All possible ranges are supported as ethical under VF. Where each person falls is entirely contingent upon themselves and their own values. Personally, I'd say I fall somewhere between self-interest and equal concern for everyone. That is, I do have an innate compassion for all other people, but I think I value my own well-being slightly more than any other single person's (with a possible exception for my wife, though I may simply value her more instrumentally rather than innately, because I'd suffer greatly without her).


The reason VF does not make strong requirements on the level of compassion is that asserting that what is ethical requires a specific level of compassion badly abuses the is-ought problem. Further, despite VF's broad acceptance of a large range of permissible compassion (or lack thereof), there are still some situations which in effect have universal ethical conclusions.

I'm guessing you're probably familiar with the is-ought problem, but if not, the idea is that you generally cannot derive an "ought" claim from merely factual statements ("is" statements) about the world. However, once a set of values or goals is established to exist, how you ought to behave follows from them (because some behaviors will better maximize/achieve them than others). The problem with utilitarian systems asserting that what is ethical is to have compassion for everyone equal to that for yourself is that this asserts an "ought" about values themselves. But if ought can only be arrived at once a person has values, then you cannot assert this claim.

A person might argue that VF is also somewhat guilty of making ought claims via axiom 2, which excludes malicious values. However, if we do not include axiom 2, then all we're talking about is rational behavior, and the word "ethics" has no meaning. Moreover, if ethics has to mean something, it seems most reasonable that it not mean behavior motivated by malice towards others. The word "evil" already describes that class of behavior, and while a person is perfectly welcome to discuss what the optimally evil behavior is, I think it's fair to say that when we're trying to discuss ethics, we're not trying to discuss that. Therefore, I don't think axiom 2 is so much a violation of is-ought as it is simply a matter of semantics.


With that in mind, I can also elaborate a bit on what I mean by the possible existence of universal ethical conclusions despite the large permissible range of compassion. Specifically, when a course of action would be optimal both when an agent is completely self-interested and when the agent has any amount of compassion for other agents, we can say that the conclusion is, in a way, a universal moral imperative, because it remains the best course of action across all possible ranges of compassion. Conversely, and perhaps more commonly, we can say that a course of action is universally bad when it is suboptimal for any level (or lack thereof) of compassion. Sam Harris discusses an intuitive example of the latter in his book The Moral Landscape (while my views are not quite the same as his, I agree with him on a number of matters). Specifically, imagine you are one of only two people in the world and you do not know the other. While there are any number of ways you could begin to interact with the other, one way that seems pretty universally bad is to immediately respond by smashing in that person's face with a rock. And the reason that is bad is that even if you are self-interested, you are immediately removing any chance to work together and help each other, which would be better than facing the world alone (this is why people banded together in early civilization to begin with!)

For the former case, when the optimal conclusion is the same irrespective of your degree of compassion: these conclusions can occur trivially when the Nash equilibrium of a problem results in the best outcome for all parties involved, or when the immediately self-interested action would cause others to respond negatively to you in the future. The latter situation is actually studied reasonably well in the context of evolutionary behavior and is typically referred to as "reciprocal altruism." I can also give a more mathematical example of when this occurs (a toy sketch follows below), but I've already said a great deal, so it's probably a good idea to pause here to let you ask questions or challenge parts of my argument that you might not think are correct.
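
To give a small taste of that more mathematical example in the meantime, here is a toy version of the reciprocal-altruism case: an iterated prisoner's dilemma in which even a purely self-interested agent scores better by cooperating with a reciprocating opponent than by always defecting. The payoff numbers and the tit-for-tat opponent are standard textbook choices, used purely for illustration:

```python
# Toy iterated prisoner's dilemma illustrating reciprocal altruism.
# Payoffs are the standard textbook values, chosen for illustration.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_last):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if opponent_last is None else opponent_last

def always_defect(opponent_last):
    """Grab the immediate payoff every round."""
    return "D"

def total_payoff(me, opponent, rounds=100):
    """My total score; each strategy sees the other's previous move."""
    total, my_last, their_last = 0, None, None
    for _ in range(rounds):
        my_move, their_move = me(their_last), opponent(my_last)
        total += PAYOFF[(my_move, their_move)]
        my_last, their_last = my_move, their_move
    return total

print(total_payoff(always_defect, tit_for_tat))  # 104: one exploit, then mutual defection
print(total_payoff(tit_for_tat, tit_for_tat))    # 300: sustained cooperation wins
```

Even with zero compassion in the reward function, defection is suboptimal here, which is the sense in which some conclusions hold across the whole permissible range of values.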
07-28-2013, 03:40 PM
Post: #10
RE: Philosophical Schools?
Alright, that was a lot. I think it would be best to address it point by point, if you're up for it.

One thing I think I understood from that is that if an action benefits the agent and minimizes damage, it is ethical. But no specific amount of care necessarily has to be given to any outside party, because by minimizing damage and avoiding malicious intent the agent has already fulfilled its moral obligation to outsiders. Extra care can be given based on the judgement of the agent, because extra care could in many cases be rationally justified. Is that a fair statement?

It also sort of addresses a question I had: if the goal set is merely survival for the individual agent, then could anything be ethical? Though this has kind of been answered because of the second principle, the question was worth bringing up, because it addresses the fact that a vague goal like survival can make an act ethical where another goal may not.

Ištu dumqim amqut, u anaku anmiq